Author Archives: Robert Graham

OMG The Stupid It Burns

This article, pointed out by @TheGrugq, is stupid enough that it's worth rebutting.

The article starts with the question "Why did the lessons of Stuxnet, Wannacry, Heartbleed and Shamoon go unheeded?". It then proceeds to ignore the lessons of those things.

Some of the actual lessons should be things like how Stuxnet crossed air gaps, how Wannacry spread through flat Windows networking, how Heartbleed comes from technical debt, and how Shamoon furthers state aims by causing damage.

But this article doesn't cover the technical lessons. Instead, it thinks the lesson should be a moral one: that we should take these things more seriously. But that's stupid. It's the sort of lesson taught by people who know nothing about the topic. When you have nothing of value to contribute to a topic, you can always take the moral high road and criticize everyone for being morally weak for not taking it more seriously. Obviously, since doctors haven't cured cancer yet, it's because they don't take the problem seriously.

The article continues to ignore the lesson of these cyber attacks and instead regales us with a list of military lessons from WW I and WW II. This makes the same mistake that many in the military make, trying to understand cyber through analogies with the real world. It's not that such lessons could have no value; it's that this article contains a poor list of them. It seems to consist of a random list of events that appeal to the author rather than events that have bearing on cybersecurity.

Then, in case we don't get the point, the article bullies us with hyperbole, clichés, buzzwords, bombastic language, famous quotes, and citations. It's hard to see how most of them actually apply to the text. Rather, it seems like they are included simply because he really, really likes them.

The article invests much effort in discussing the buzzword "OODA loop". Most attacks in cyberspace don't have one. Instead, attackers flail around, trying lots of random things, overcoming defense with brute-force rather than an understanding of what's going on. That's obviously the case with Wannacry: it was an accident, with the perpetrator experimenting with what would happen if they added the ETERNALBLUE exploit to their existing ransomware code. The consequence was beyond anybody's ability to predict.

You might claim that this is just the first stage, that they'll loop around, observe Wannacry's effects, orient themselves, decide, then act upon what they learned. Nope. Wannacry burned the exploit. It's essentially removed any vulnerable systems from the public Internet, thereby making it impossible to use what they learned. It's still active a year later, with infected systems behind firewalls busily scanning the Internet so that if you put a new system online that's vulnerable, it'll be taken offline within a few hours, before any other evildoer can take advantage of it.

See what I'm doing here? Learning the actual lessons of things like Wannacry? The thing the above article fails to do??

The article has a humorous paragraph on "defense in depth", misunderstanding the term. To be fair, it's the cybersecurity industry's fault: they adopted then redefined the term. That's why there are two separate articles on Wikipedia: one for the old military term (as used in this article) and one for the new cybersecurity term.

As used in the cybersecurity industry, "defense in depth" means having multiple layers of security. Many organizations put all their defensive efforts on the perimeter, and none inside a network. The idea of "defense in depth" is to put more defenses inside the network. For example, instead of just one firewall at the edge of the network, put firewalls inside the network to segment different subnetworks from each other, so that a ransomware infection in the customer support computers doesn't spread to sales and marketing computers.
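The idea of internal segmentation can be sketched with Linux iptables rules on a router sitting between two subnets. Everything here (interface names, subnet addresses, the ticketing-server address) is invented for illustration, not taken from any real network:

```shell
# Hypothetical Linux router between two internal subnets:
# eth1 = customer support (10.1.0.0/24), eth2 = sales/marketing (10.2.0.0/24).
# All names and addresses are made up for illustration.

# Default to dropping traffic forwarded between internal segments
iptables -P FORWARD DROP

# Allow replies for already-established sessions back through
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Let customer support reach only one ticketing server in the other
# segment, rather than every machine in sales/marketing
iptables -A FORWARD -i eth1 -o eth2 -d 10.2.0.10 -p tcp --dport 443 -j ACCEPT

# Everything else between the segments stays blocked, so ransomware
# in one subnet can't freely spread to the other
```

The point of the sketch is the default-drop FORWARD policy: the internal firewall fails closed, and only the specific cross-segment traffic the business actually needs is permitted.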

The article talks about exploiting WiFi chips to bypass defense in depth measures like browser sandboxes. This conflates different types of attacks. A WiFi attack is usually considered a local attack, from somebody next to you in a bar, rather than a remote attack from a server in Russia. Moreover, far from disproving "defense in depth", such WiFi attacks highlight the need for it. Namely, phones need to be designed so that successful exploitation of other microprocessors (namely, the WiFi, Bluetooth, and cellular baseband chips) can't directly compromise the host system. In other words, once exploited with "Broadpwn", a hacker would need to extend the exploit chain with another vulnerability in the host's Broadcom WiFi driver rather than immediately exploiting a DMA attack across PCIe. This suggests that if PCIe is used to interface with peripherals in the phone, an IOMMU should be used, for "defense in depth".

Cybersecurity is a young field. There are lots of useful things that outsider non-techies can teach us. Lessons from military history would be well-received.

But that's not this story. Instead, this story is by an outsider telling us we don't know what we are doing, that they do, and then proceeding to prove they don't know what they are doing. Their argument is based on moral suasion and on bullying us with what appears on the surface to be intellectual rigor but is in fact devoid of anything smart.

My fear, here, is that I'm going to be in a meeting where somebody has read this pretentious garbage, explaining to me why "defense in depth" is wrong and how we need to OODA faster. I'd rather nip this in the bud, pointing out that if you found anything interesting in that article, you are wrong.


Notes on setting up Raspberry Pi 3 as WiFi hotspot

I want to sniff the packets for IoT devices. There are a number of ways of doing this, but one straightforward mechanism is configuring a "Raspberry Pi 3 B" as a WiFi hotspot, then running tcpdump on it to record all the packets that pass through it. Google gives lots of results on how to do this, but they all demand that you have the precise hardware, WiFi adapter, and software that the authors had, so that's a pain.


I got it working using the instructions here. There are a few additional notes, which is why I'm writing this blogpost, so I remember them.
https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md

I'm using the RPi-3-B and not the RPi-3-B+, and the latest version of Raspbian at the time of this writing, "Raspbian Stretch Lite 2018-3-13".

Some things didn't work as described. The first is that it couldn't find the package "hostapd". The solution was to run "apt-get update" a second time.

The second problem was an error message about NAT not working when trying to set the masquerade rule. That's because the 'upgrade' updates the kernel, making the running system out-of-date with the files on the disk. The solution is to make sure you reboot after upgrading.

Thus, what you do at the start is:

apt-get update
apt-get upgrade
apt-get update
shutdown -r now

Then it's just "apt-get install tcpdump" and start capturing on wlan0. This will get the non-monitor-mode Ethernet frames, which is what I want.
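As a sketch of that capture step (wlan0 is the hotspot interface from the setup above; the output filename is just an example, and tcpdump needs root):

```shell
# Record everything passing through the hotspot interface to a pcap file
# for later analysis (e.g. in Wireshark). Requires root.
tcpdump -i wlan0 -w iot-capture.pcap

# Or watch DNS queries live, to see which servers the IoT devices talk to
tcpdump -i wlan0 -n port 53
```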


My letter urging Georgia governor to veto anti-hacking bill

April 16, 2018

Office of the Governor
206 Washington Street
111 State Capitol
Atlanta, Georgia 30334


Re: SB 315

Dear Governor Deal:

I am writing to urge you to veto SB315, the "Unauthorized Computer Access" bill.

The cybersecurity community, of which Georgia is a leader, is nearly unanimous that SB315 will make cybersecurity worse. You've undoubtedly heard from many of us opposing this bill. It does not help in prosecuting foreign hackers who target Georgian computers, such as our elections systems. Instead, it prevents those who notice security flaws from pointing them out, thereby getting them fixed. This law violates the well-known Kerckhoffs's Principle: that security is achieved through transparency and openness rather than secrecy and obscurity.

That the bill contains this flaw is no accident. The justification for this bill comes from an incident where a security researcher noticed a Georgia state election system had made voter information public. This remained unfixed, months after the vulnerability was first disclosed, leaving the data exposed. Those in charge decided that it was better to prosecute those responsible for discovering the flaw rather than punish those who failed to secure Georgia voter information, hence this law.

Too many security experts oppose this bill for it to go forward. Signing this bill, one that is weak on cybersecurity by favoring political cover-up over the consensus of the cybersecurity community, will be part of your legacy. I urge you instead to veto this bill, commanding the legislature to write a better one, this time consulting experts, which due to Georgia's thriving cybersecurity community, we do not lack.

Thank you for your attention.

Sincerely,
Robert Graham
(formerly) Chief Scientist, Internet Security Systems

Let’s stop talking about password strength

Picture from EFF -- CC-BY license
Near the top of most security recommendations is to use "strong passwords". We need to stop doing this.

Yes, weak passwords can be a problem. If a website gets hacked, weak passwords are easier to crack. It's not that this is wrong advice.

On the other hand, it's not particularly good advice, either. It's far down the list of important advice that people need to remember. "Weak passwords" are nowhere near the risk of "password reuse". When your Facebook or email account gets hacked, it's because you used the same password across many websites, not because you used a weak password.

Important websites, where the strength of your password matters, already take care of the problem. They use strong, salted hashes on the backend to protect the password. On the frontend, they force passwords to be a certain length and a certain complexity. Maybe the better advice is to not trust any website that doesn't enforce stronger passwords (minimum of 8 characters consisting of both letters and non-letters).

To some extent, this "strong password" advice has become obsolete. A decade ago, websites had poor protection (MD5 hashes) and no enforcement of complexity, so it was up to the user to choose strong passwords. Now that important websites have changed their behavior, such as using bcrypt, there is less onus on the user.


But the real issue here is that "strong password" advice reflects the evil, authoritarian impulses of the infosec community. Instead of measuring insecurity in terms of costs vs. benefits, risks vs. rewards, we insist that it's an issue of moral weakness. We pretend that flaws happen because people are greedy, lazy, and ignorant. We pretend that security is its own goal, a benefit we should achieve, rather than a cost we must endure.

We like giving moral advice because it's easy: just be "stronger". Discussing "password reuse" is more complicated, forcing us to discuss password managers, writing down passwords on paper, the fact that it's okay to reuse passwords for crappy websites you don't care about, and so on.

What I'm trying to say is that the moral weakness here is ours. Rather than giving pertinent advice, we give lazy advice. We give advice that shames victims for being weak while pretending that we are strong.

So stop telling people to use strong passwords. It's crass advice on your part and largely unhelpful for your audience, distracting them from the more important things.

Why the crypto-backdoor side is morally corrupt

Crypto-backdoors for law enforcement are a reasonable position to argue for, but the side that argues for them adds things that are either outright lies or morally corrupt. Every year, the amount of digital evidence law enforcement has to solve crimes increases, yet they outrageously lie, claiming they are "going dark", losing access to evidence. A weirder claim is that those who oppose crypto-backdoors are nonetheless ethically required to make them work. This is morally corrupt.

That's the point of this Lawfare post, which claims:
What I am saying is that those arguing that we should reject third-party access out of hand haven’t carried their research burden. ... There are two reasons why I think there hasn’t been enough research to establish the no-third-party access position. First, research in this area is “taboo” among security researchers. ... the second reason why I believe more research needs to be done: the fact that prominent non-government experts are publicly willing to try to build secure third-party-access solutions should make the information-security community question the consensus view. 
This is nonsense. It's like claiming we haven't cured the common cold because researchers haven't spent enough effort at it. When researchers claim they've tried 10,000 ways to make something work, it's like insisting they haven't done enough because they haven't tried 10,001 times.

Certainly, half the community doesn't want to make such things work. Any solution for the "legitimate" law enforcement of the United States means a solution for illegitimate states like China and Russia which would use the feature to oppress their own people. Even if I believe it's a net benefit to the United States, I would never attempt such research because of China and Russia.

But computer scientists notoriously ignore ethics in pursuit of developing technology. That describes the other half of the crypto community who would gladly work on the problem. The reason they haven't come up with solutions is because the problem is hard, really hard.

The second reason the above argument is wrong: it says we should believe a solution is possible because some outsiders are willing to try. But as Yoda says, do or do not, there is no try. Our opinions on the difficulty of the problem don't change simply because people are trying. Our opinions change when people are succeeding. People are always trying the impossible, that's not evidence it's possible.

The paper cherry-picks things, like Intel CPU features, to make it seem like they are making forward progress. No. Intel's SGX extensions are there for other reasons. Sure, it's a new development, and new developments may change our opinion on the feasibility of law enforcement backdoors. But nowhere in talking about this new development have they actually proposed a solution to the backdoor problem. New developments happen all the time, and the pro-backdoor side is going to seize upon each and every one to claim that this, finally, solves the backdoor problem, without showing exactly how it solves the problem.

The Lawfare post does make one good argument, that there is no such thing as "absolute security", and thus the argument is stupid that "crypto-backdoors would be less than absolute security". Too often in the cybersecurity community we reject solutions that don't provide "absolute security" while failing to acknowledge that "absolute security" is impossible.

But that's not really what's going on here. Cryptographers aren't certain we've achieved even "adequate security" with current crypto regimes like SSL/TLS/HTTPS. Every few years we find horrible flaws in the old versions and have to develop new versions. If you steal somebody's iPhone today, it's so secure you can't decrypt anything on it. But then if you hold it for 5 years, somebody will eventually figure out a hole and then you'll be able to decrypt it -- a hole that won't affect Apple's newer phones.

The reason we think we can't get crypto-backdoors correct is simply because we can't get crypto completely correct. It's implausible that we can get the backdoors working securely when we still have so much trouble getting encryption working correctly in the first place.

Thus, we aren't talking about "insignificantly less security", we are talking about going from "barely adequate security" to "inadequate security". Negotiating keys between you and a website is hard enough without simultaneously having to juggle keys with law enforcement organizations.

And finally, even if cryptographers do everything correctly, law enforcement themselves haven't proven reliable. The NSA exposed its exploits (like the infamous ETERNALBLUE), and OPM lost all its security clearance records. If they can't keep those secrets, it's unreasonable to believe they can hold onto backdoor secrets. One of the problems cryptographers are expected to solve is partly this: to make it work in such a way that it's unlikely law enforcement will lose its secrets.

Summary

This argument by the pro-backdoor side, that we in the crypto community should do more to solve backdoors, is simply wrong. We've already spent a lot of effort on this. Many continue to work on the problem -- the reason you haven't heard much from them is that they haven't had much success. It's like blaming doctors for not doing more to develop interrogation drugs (truth serums). Sure, a lot of doctors won't work on this because it's distasteful, but at the same time, there are many drug companies who would love to profit from them. The reason they don't exist is not that nobody is spending enough money researching them; it's that there is no plausible solution in sight.

Crypto-backdoors designed for law-enforcement will significantly harm your security. This may change in the future, but that's the state of crypto today. You should trust the crypto experts on this, not lawyers.

WannaCry after one year

In the news, Boeing (an aircraft maker) has been "targeted by a WannaCry virus attack". Phrased this way, it's implausible. There are no new attacks targeting people with WannaCry. There is either no WannaCry, or it's simply a continuation of the attack from a year ago.


It's possible what happened is that an anti-virus product called a new virus "WannaCry". Virus families are often related, and sometimes a distant relative gets called the same thing. I know this from watching the way various anti-virus products label my own software, which isn't a virus, but which virus writers often include with their own stuff. The Lazarus group, which is believed to be responsible for WannaCry, has whole virus families like this. Thus, just because an AV product claims you are infected with WannaCry doesn't mean it's the same thing that everyone else is calling WannaCry.

Famously, WannaCry was the first virus/ransomware/worm that used the NSA ETERNALBLUE exploit. Other viruses have since added the exploit, and of course, hackers use it when attacking systems. It may be that a network intrusion detection system detected ETERNALBLUE, which people then assumed was due to WannaCry. It may actually have been an nPetya infection instead (nPetya was the second major virus/worm/ransomware to use the exploit).

Or it could be the real WannaCry, but it's probably not a new "attack" that "targets" Boeing. Instead, it's likely a continuation from WannaCry's first appearance. WannaCry is a worm, which means it spreads automatically after it was launched, for years, without anybody in control. Infected machines still exist, unnoticed by their owners, attacking random machines on the Internet. If you plug in an unpatched computer onto the raw Internet, without the benefit of a firewall, it'll get infected within an hour.

However, the Boeing manufacturing systems that were infected were not on the Internet, so what happened? The narrative from the news stories imply some nefarious hacker activity that "targeted" Boeing, but that's unlikely.

We now have over 15 years of experience with network worms getting into strange places, disconnected and even "air gapped" from the Internet. The most common reason is laptops. Somebody takes their laptop to some place like an airport WiFi network, and gets infected. They put their laptop to sleep, then wake it again when they reach their destination, and plug it into the manufacturing network. At this point, the virus spreads and infects everything. This is especially the case with maintenance/support engineers, who often have specialized software they use to control manufacturing machines, for which they have a reason to connect to the local network even if it doesn't have useful access to the Internet. A single engineer may act as a sort of Typhoid Mary, going from customer to customer, infecting each in turn whenever they open their laptop.

Another cause for infection is virtual machines. A common practice is to take "snapshots" of live machines and save them to backups. Should the virtual machine crash, instead of rebooting it, it's simply restored from the backed up running image. If that backup image is infected, then bringing it out of sleep will allow the worm to start spreading.

Jake Williams claims he's seen three other manufacturing networks infected with WannaCry. Why does manufacturing seem more susceptible? The reason appears to be the "killswitch" that stops WannaCry from running elsewhere. The killswitch uses a DNS lookup, stopping itself if it can resolve a certain domain. Manufacturing networks are largely disconnected from the Internet enough that such DNS lookups don't work, so the domain can't be found, so the killswitch doesn't work. Thus, manufacturing systems are no more likely to get infected, but the lack of killswitch means the virus will continue to run, attacking more systems instead of immediately killing itself.

One solution to this would be to set up sinkhole DNS servers on the network that resolve all unknown DNS queries to a single server that logs all requests. This is trivially set up with most DNS servers. The logs will quickly identify problems on the network, as well as any hacker or virus activity. The side effect is that it would make this killswitch kill WannaCry. WannaCry isn't sufficient reason to set up sinkhole servers, of course, but it's something I've found generally useful in the past.
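A minimal sketch of such a sinkhole, assuming dnsmasq as the internal DNS server (the sinkhole address 192.0.2.53 and the log path are placeholders, not from any real deployment):

```
# Example /etc/dnsmasq.conf fragment for a sinkhole resolver.
# 192.0.2.53 is a placeholder for an internal server that logs
# every connection attempt made to it.

# Log every query, so virus or hacker lookups show up in the logs
log-queries
log-facility=/var/log/dnsmasq.log

# Answer every domain ("#" is dnsmasq's match-everything wildcard)
# with the sinkhole server's address
address=/#/192.0.2.53
```

Any infected machine looking up the killswitch domain would get an answer, triggering the killswitch; anything else probing the network shows up in the query log.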

Conclusion

Something obviously happened to the Boeing plant, but the narrative is all wrong. Words like "targeted attack" imply things that likely didn't happen. Facts are so loose in cybersecurity that it may not have even been WannaCry.

The real story is that the original WannaCry is still out there, still trying to spread. Simply put a computer on the raw Internet (without a firewall) and you'll get attacked. That, somehow, isn't news. Instead, what's news is whenever that continued infection hits somewhere famous, like Boeing, even though (as Boeing claims) it had no important effect.

What John Oliver gets wrong about Bitcoin

John Oliver covered bitcoin/cryptocurrencies last night. I thought I'd describe a bunch of things he gets wrong.


How Bitcoin works

Nowhere in the show does it describe what Bitcoin is and how it works.

Discussions should always start with Satoshi Nakamoto's original paper. The thing Satoshi points out is that there is an important cost to normal transactions, namely, the entire legal system designed to protect you against fraud, such as the way you can reverse the transactions on your credit card if it gets stolen. The point of Bitcoin is that there is no way to reverse a charge. A transaction is done via cryptography: to transfer money to me, you sign the transaction with your secret key, handing ownership over to my public key, with no third party involved that can reverse the transaction, and essentially no overhead.

All the rest of the stuff, like the decentralized blockchain and mining, is all about making that work.

Bitcoin crazies forget about the original genesis of Bitcoin. For example, they talk about adding features to stop fraud, reversing transactions, and having a central authority that manages that. This misses the point, because the existing electronic banking system already does that, and does a better job at it than cryptocurrencies ever can. If you want to mock cryptocurrencies, talk about the "DAO", which did exactly that -- and collapsed in a big fraudulent scheme where insiders made money and outsiders didn't.

Sticking to Satoshi's original ideas is a lot better than repeating how the crazy fringe activists define Bitcoin.

How does any money have value?

Oliver's answer is currencies have value because people agree that they have value, like how they agree a Beanie Baby is worth $15,000.

This is wrong. A better way of framing the question is to ask why the value of money changes. The dollar has been losing roughly 2% of its value each year for decades. This is called "inflation": as the dollar loses value, it takes more dollars to buy things, which means the price of things (in dollars) goes up, and employers have to pay us more dollars so that we can buy the same amount of things.

The reason the value of the dollar changes is largely because the Federal Reserve manages the supply of dollars, using the same law of Supply and Demand. As you know, if a supply decreases (like oil), then the price goes up, or if the supply of something increases, the price goes down. The Fed manages money the same way: when prices rise (the dollar is worth less), the Fed reduces the supply of dollars, causing it to be worth more. Conversely, if prices fall (or don't rise fast enough), the Fed increases supply, so that the dollar is worth less.

The reason money follows the law of Supply and Demand is that people use money; they consume it like they do other goods and services, like gasoline, tax preparation, food, dance lessons, and so forth. It's not like a fine art painting, a stamp collection, or a Beanie Baby -- money is a product. It's just that people have a hard time thinking of it as a consumer product since, in their experience, money is what they use to buy consumer products. But it's a symmetric operation: when you buy gasoline with dollars, you are actually selling dollars in exchange for gasoline. That you call one side of this transaction "money" and the other "goods" is purely arbitrary; you could just as well call gasoline the money and dollars the good being bought and sold for gasoline.

The reason dollars are a product is that trying to use gasoline as money is a pain in the neck. Storing it and exchanging it is difficult. Goods like this do become money, such as famously how prisons often use cigarettes as a medium of exchange, even for non-smokers, but it has to be a good that is fungible, storable, and easily exchanged. Dollars are the most fungible, the most storable, and the most easily exchanged, so they have the most value as "money". Sure, the mechanic can fix the farmer's car for three chickens instead, but most of the time, both parties in the transaction would rather exchange the same value using dollars than chickens.

So the value of dollars is not like the value of Beanie Babies, which people might buy for $15,000 and which changes purely on the whims of investors. Instead, a dollar is like gasoline, obeying the law of Supply and Demand.

This brings us back to the question of where Bitcoin gets its value. While Bitcoin is indeed used like dollars to buy things, that's only a tiny use of the currency, so its value isn't determined by Supply and Demand. Instead, the value of Bitcoin is a lot like that of Beanie Babies, obeying the laws of investments. So in this respect, Oliver is right about where the value of Bitcoin comes from, but wrong about where the value of dollars comes from.

Why Bitcoin conference didn't take Bitcoin

John Oliver points out the irony of a Bitcoin conference that stopped accepting payments in Bitcoin for tickets.

The biggest reason for this is because Bitcoin has become so popular that transaction fees have gone up. Instead of being proof of failure, it's proof of popularity. What John Oliver is saying is the old joke that nobody goes to that popular restaurant anymore because it's too crowded and you can't get a reservation.

Moreover, the point of Bitcoin is not to replace everyday currencies for everyday transactions. If you read Satoshi Nakamoto's whitepaper, its only goal is to replace certain types of transactions, like purely electronic transactions where electronic goods and services are being exchanged. Where real-life goods/services are being exchanged, existing currencies work just fine. It's only the crazy activists who claim Bitcoin will eventually replace real world currencies -- the saner people see it co-existing with real-world currencies, each with a different value to consumers.

Turning a McNugget back into a chicken

John Oliver uses the metaphor that while you can process a chicken into McNuggets, you can't reverse the process. It's a funny metaphor.

But it's not clear what the heck this metaphor is trying to explain. It's not a metaphor for the blockchain, but a metaphor for a "cryptographic hash", where each block is a chicken, and the McNugget is the signature for the block (well, the block plus the signature of the last block, forming a chain).

Even then, the metaphor has problems. For it to accurately describe a cryptographic hash, the McNugget produced from each chicken must be unique to that chicken. You can then identify the original chicken simply by looking at the McNugget. A slight change in the original chicken, like losing a feather, results in a completely different McNugget. Thus, nuggets can be used to tell if the original chicken has changed.

This then leads to the key property of the blockchain: it is unalterable. You can't go back and change any of the blocks of data, because the fingerprints, the nuggets, will also change and break the nugget chain.
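That hash-chain property can be demonstrated with an ordinary hashing tool. Here's a toy sketch using sha256sum, where each "nugget" covers the block's data plus the previous nugget (the block contents are invented for illustration):

```shell
# Toy two-block hash chain; block contents are made up.
# Each "nugget" (hash) covers the block's data plus the previous nugget.
h1=$(printf 'block 1: Alice pays Bob 5' | sha256sum | cut -d' ' -f1)
h2=$(printf 'block 2: Bob pays Carol 3 %s' "$h1" | sha256sum | cut -d' ' -f1)

# Tamper with block 1 ever so slightly (5 becomes 9)...
h1_tampered=$(printf 'block 1: Alice pays Bob 9' | sha256sum | cut -d' ' -f1)

# ...and its nugget is completely different, which also invalidates
# every nugget chained after it.
echo "original: $h1"
echo "tampered: $h1_tampered"
```

The one-character change produces an entirely new hash, and since block 2's hash includes block 1's hash, the whole chain after the tampered block breaks.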

The point is that John Oliver is laughing at what he thinks is a silly metaphor for explaining the blockchain because he totally misses the point of the metaphor.

Oliver rightly says "don't worry if you don't understand it -- most people don't", but that includes the big companies that John Oliver names. Some companies do get it, and are producing reasonable things (like JP Morgan, by all accounts), but some don't. IBM and other big consultancies are charging companies millions of dollars to consult with them on blockchain products where nobody involved, the customer or the consultancy, actually understands any of it. That doesn't stop them from happily charging customers on one side and happily spending money on the other.

Thus, rather than Oliver explaining the problem, he's just being part of the problem. His explanation of blockchain left you dumber than before.

ICOs

John Oliver mocks the Brave ICO ($35 million in 30 seconds), claiming it's all driven by YouTube personalities and people who aren't looking at the fundamentals.

And while it's true that most ICOs are bunk, the Brave ICO actually had a business model behind it. Brave is a Chrome-like web-browser whose distinguishing feature is that it protects your privacy from advertisers. If you don't use Brave or a browser with an ad-block extension, you have no idea how bad things are for you. However, this presents a problem for websites that fund themselves via advertisements, which is most of them, because visitors no longer see ads. Brave has a fix for this. Most people wouldn't mind supporting the websites they visit often, like the New York Times. That's where the Brave ICO "token" comes in: it's not simply stock in Brave, but a token for micropayments to websites. Users buy tokens, then use them for micropayments to websites like the New York Times. The New York Times then sells the tokens back to the market for dollars. The buying and selling of tokens happens without a centralized middleman.

This is still all speculative, of course, and it remains to be seen how successful Brave will be, but it's a serious effort. It has well respected VC behind the company, a well-respected founder (despite the fact he invented JavaScript), and well-respected employees. It's not a scam, it's a legitimate venture.

How do you make money from Bitcoin?

The last part of the show is dedicated to describing all the scams out there, advising people to be careful and to be "responsible". This is garbage.

It's like my simple two-step process for making lots of money via Bitcoin: (1) buy when the price is low, and (2) sell when the price is high. My advice is correct, of course, but useless. Same as "be careful" and "invest responsibly".

The truth about investing in cryptocurrencies is "don't". The only responsible way to invest is to buy low-overhead market index funds and hold for retirement. No, you won't get super rich doing this, but anything other than this is irresponsible gambling.

It's a hard lesson to learn, because everyone is telling you the opposite. The entire CNBC channel is devoted to day traders, who buy and sell stocks at a high rate based on the same principle as a Ponzi scheme, basing their judgment not on the fundamentals (like long-term dividends) but on the animal spirits of whatever stock is hot or cold at the moment. This is the same reason people buy or sell Bitcoin: not because they can describe its fundamental value, but because they believe in a bigger fool down the road who will buy it for even more.

For things like Bitcoin, the trick to making money is to have bought it over 7 years ago, when it was essentially worthless except to nerds who were into that sort of thing. It's the same trick to making a lot of money in Magic: The Gathering trading cards, which nerds bought decades ago and which are worth a ton of money now. Or, to have bought Apple stock back in 2009 when the iPhone was new, when nerds could understand the potential of real Internet access and apps in a way that Wall Street could not.

That was my strategy: be a nerd, who gets into things. I've made a good amount of money on all these things because as a nerd, I was into Magic: The Gathering, Bitcoin, and the iPhone before anybody else was, and bought in at the point where these things were essentially valueless.

At this point with cryptocurrencies, with the non-nerds now flooding the market, there's little chance of making it rich. The lottery is probably a better bet. Instead, if you want to make money, become a nerd, obsess about a thing, understand a thing when it's new, and cash out once the rest of the market figures it out. That might be Brave, for example, but buy into it because you've spent the last year studying the browser advertisement ecosystem, the market's willingness to pay for content, and how their Basic Attention Token delivers value to websites -- not because you want in on the ICO craze.

Conclusion

John Oliver spends 25 minutes explaining Bitcoin, cryptocurrencies, and the blockchain to you. Sure, it's funny, but it leaves you worse off than when it started. He admits they "simplify" the explanation, but they simplified it to the point where they removed all useful information.

Some notes on memcached DDoS

I thought I'd write up some notes on the memcached DDoS. Specifically, I describe how many I found scanning the Internet with masscan, and how to use masscan as a killswitch to neuter the worst of the attacks.


Test your servers

I added code to my port scanner for this, then scanned the Internet:

masscan 0.0.0.0/0 -pU:11211 --banners | grep memcached

This example scans the entire Internet (/0). Replace 0.0.0.0/0 with your address range (or ranges).

This produces output that looks like this:

Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on 89.110.149.218: [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on 84.200.45.2: [memcached] uptime=399858 time=1520485362 version=1.4.20
Banner on port 11211/udp on 5.1.66.2: [memcached] uptime=29429482 time=1520485363 version=1.4.20
Banner on port 11211/udp on 103.248.253.112: [memcached] uptime=2879363 time=1520485366 version=1.2.6
Banner on port 11211/udp on 193.240.236.171: [memcached] uptime=42083736 time=1520485365 version=1.4.13

The "banners" check filters for those with valid memcached responses, so you don't get other stuff that isn't memcached. To filter this output further, use 'cut' to grab just column 6:

... | cut -d ' ' -f 6 | cut -d: -f1

You often get multiple responses to just one query, so you'll want to sort/uniq the list:

... | sort | uniq
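For post-processing beyond what the shell pipeline gives you, the same extraction can be sketched in a few lines of Python. The sample lines below are from the output shown above; the regex is my own assumption about which parts of the format are stable:

```python
import re

# Sample banner lines, in the format masscan prints (from the scan above).
output = """\
Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on 89.110.149.218: [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
"""

def unique_ips(text):
    """Extract a sorted, deduplicated list of IPs from memcached banner lines."""
    ips = set()
    for line in text.splitlines():
        m = re.search(r"on (\d+\.\d+\.\d+\.\d+): \[memcached\]", line)
        if m:
            ips.add(m.group(1))
    return sorted(ips)

# Duplicate responses from the same server collapse to one entry.
print(unique_ips(output))
```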


My results from an Internet wide scan

I got 15181 results (or roughly 15,000).

People are using Shodan to find a list of memcached servers. They might be getting a lot of results back that respond to TCP instead of UDP. Only UDP can be used for the attack.

Other researchers scanned the Internet a few days ago and found ~31k. I don't know if this means people have been removing these from the Internet.

Masscan as exploit script

BTW, you can not only use masscan to find amplifiers, you can also use it to carry out the DDoS. Simply import the list of amplifier IP addresses, then spoof the source address as that of the target. All the responses will go back to the source address.

masscan -iL amplifiers.txt -pU:11211 --spoof-ip <target-ip> --rate 100000

I point this out to show how there's no magic in exploiting this. Numerous exploit scripts have been released, because it's so easy.


Why memcached servers are vulnerable

Like many servers, memcached listens to local IP address 127.0.0.1 for local administration. By listening only on the local IP address, remote people cannot talk to the server.

However, this configuration often goes wrong, and you end up listening on either 0.0.0.0 (all interfaces) or on one of the external interfaces. There's a common Linux network-stack issue where this keeps happening, such as when trying to get VMs connected to the network. I forget the exact details, but the point is that lots of servers that intend to listen only on 127.0.0.1 end up listening on external interfaces instead. It's not a good security barrier.

Thus, there are lots of memcached servers listening on their control port (11211) on external interfaces.

How the protocol works

The protocol is documented here. It's pretty straightforward.

The easiest amplification attack is to send the "stats" command. This is a 15-byte UDP packet that causes the server to send back a large response full of useful statistics about the server. You often see around 10 kilobytes of response across several packets.

A harder, but more effective, attack uses a two-step process. You first use the "add" or "set" commands to put chunks of data into the server, then send a "get" command to retrieve them. You can easily put 100 megabytes of data into the server this way, then cause its retrieval with a single "get" command.

That's why this has been the largest amplification ever: a single 100-byte packet can in theory cause a 100-megabyte response.

Doing the math, the 1.3 terabit/second DDoS divided across the 15,000 servers I found vulnerable on the Internet comes to an average of roughly 90 megabits/second per server. This is fairly minor, and is indeed something even small servers (like Raspberry Pis) can generate.
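The arithmetic can be checked in a couple of lines; the 100-byte request and 100-megabyte response are the theoretical figures from the paragraphs above:

```python
# Theoretical amplification factor: a ~100-byte "get" request
# retrieving a ~100-megabyte cached value.
request_bytes = 100
response_bytes = 100 * 1000 * 1000
amplification = response_bytes / request_bytes

# The observed 1.3 terabit/second attack, spread across ~15,000 amplifiers.
attack_bps = 1.3e12
servers = 15000
per_server_bps = attack_bps / servers

print(int(amplification))              # 1000000 -- a million-to-one
print(round(per_server_bps / 1e6, 1))  # 86.7 megabits/second per server
```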

Neutering the attack ("kill switch")

If they are using the more powerful attack against you, you can neuter it: you can send a "flush_all" command back at the servers who are flooding you, causing them to drop all those large chunks of data from the cache.

I'm going to describe how I would do this.

First, get a list of attackers, meaning the amplifiers that are flooding you. The way to do this is to grab a packet sniffer and capture all packets with a source port of 11211. Here is an example using tcpdump.

tcpdump -w attackers.pcap src port 11211

Let that run for a while, then hit [ctrl-c] to stop, then extract the list of IP addresses in the capture file. The way I do this is with tshark (comes with Wireshark):

tshark -r attackers.pcap -T fields -e ip.src | sort | uniq > amplifiers.txt

Now, craft a flush_all payload. There are many ways of doing this. For example, if you are using nmap or masscan, you can add the bytes to the nmap-payloads.txt file. Also, masscan can read this directly from a packet capture file. To do this, first craft a packet, such as with the following command line foo:

echo -en "\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n" | nc -q1 -u localhost 11211

Capture this packet using tcpdump or something, and save into a file "flush_all.pcap". If you want to skip this step, I've already done this for you, go grab the file from GitHub:


Now that we have our list of attackers (amplifiers.txt) and a payload to blast at them (flush_all.pcap), use masscan to send it:

masscan -iL amplifiers.txt -pU:11211 --pcap-payload flush_all.pcap
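If you'd rather skip the pcap step entirely, the same payload can be sent with a short Python sketch. The flush() helper and the amplifiers.txt loop are my own illustration, not part of masscan:

```python
import socket

# The 8-byte memcached UDP frame header (request id=0, sequence=0,
# datagram count=1, reserved=0), followed by the ASCII command.
PAYLOAD = b"\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n"

def flush(ip, port=11211, timeout=1.0):
    """Send flush_all to one memcached server; return any reply bytes."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(PAYLOAD, (ip, port))
        reply, _ = s.recvfrom(1500)
        return reply
    except socket.timeout:
        return b""
    finally:
        s.close()

# Example: flush every amplifier from the list built earlier.
# for line in open("amplifiers.txt"):
#     flush(line.strip())
```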

Reportedly, "shutdown" may also work to completely shut down the amplifiers. I'll leave that as an exercise for the reader, since of course you'll be adversely affecting the servers.

Some notes

Here is some good reading on this attack:







AskRob: Does Tor let government peek at vuln info?

On Twitter, somebody asked this question:



The question is about a blog post that claims Tor privately tips off the government about vulnerabilities, using as proof a "vulnerability" from October 2007 that wasn't made public until 2011.

The tl;dr is that it's bunk. There was no vulnerability; it was a feature request. The details were already public. There was no spy agency involved, just the agency that runs Voice of America, which tries to protect activists under repressive foreign regimes.

Discussion

The issue is that Tor traffic looks like Tor traffic, making it easy to block/censor, or worse, identify users. Over the years, Tor has added features to make it look more and more like normal traffic, like the encrypted traffic used by Facebook, Google, and Apple. Tor improves this bit by bit over time, but short of actually piggybacking on website traffic, it will always leave some telltale signature.

An example showing how we can distinguish Tor traffic is the packet below, from the latest version of the Tor server:


Had this been Google or Facebook, the names would be something like "www.google.com" or "facebook.com". Or, had this been a normal "self-signed" certificate, the names would still be recognizable. But Tor creates randomized names, with letters and numbers, making it distinctive. It's hard to automate detection of this, because it's only probably Tor (other self-signed certificates look like this, too), which means you'll have occasional false positives. But still, if you compare this to the pattern of traffic, you can reliably detect that Tor is happening on your network.
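To illustrate why detection is only "probably Tor", here's a toy heuristic in Python that flags certificate names with random-looking labels. This is purely illustrative: the threshold is my own invention, not any real IDS signature, and certainly not Tor's actual name-generation algorithm:

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_random(hostname, threshold=3.0):
    """Toy heuristic: flag names whose longest label has high character
    entropy, as the randomized letters-and-numbers Tor names do."""
    label = max(hostname.split("."), key=len)
    return len(label) >= 8 and entropy(label) >= threshold

print(looks_random("www.google.com"))          # False: a normal name
print(looks_random("www.x7kq2mzt9fbl3a.net"))  # True: random-looking label
```

Note the false-positive problem the text describes: any self-signed certificate with a random-looking name trips the same heuristic, which is why you'd also need to look at the pattern of traffic.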

This has always been a known issue, since the earliest days. Google the search term "detect tor traffic", and set your advanced search dates to before 2007, and you'll see lots of discussion about this, such as this post for writing intrusion-detection signatures for Tor.

Among the things you'll find is this presentation from 2006 where its creator (Roger Dingledine) talks about how Tor can be identified on the network with its unique network fingerprint. For a "vulnerability" they supposedly kept private until 2011, they were awfully darn public about it.



The above blogpost claims Tor kept this vulnerability secret until 2011 by citing this message. That's because the author, Levine, doesn't understand the terminology and is just blindly searching for an exact match for "TLS normalization". Here's an earlier proposed change for the long-term goal to "make our connection handshake look closer to a regular HTTPS [TLS] connection", from February 2007. Here is another proposal from October 2007 on changing TLS certificates, from days after the email discussion (after they shipped the feature, presumably).

What we see here is a known problem from the very beginning of the project, a long-term effort to fix that problem, and a slow dribble of features added over time to preserve backwards compatibility.

Now let's talk about the original train of emails cited in the blogpost. It's hard to see the full context here, but it sounds like BBG made a feature request to make Tor look even more like normal TLS, which is hinted at by the phrase "make our funders happy". Of course the people giving Tor money are going to ask for improvements, and of course Tor would in turn discuss those improvements with the donor before implementing them. It's common in project management: somebody sends you a feature request, you then send the proposal back to them to verify that what you are building is what they asked for.

As for the subsequent salacious paragraph about "secrecy", that too is normal. When fixing a problem, you don't want to talk about the details until after you have a fix. But note that this is largely more for PR than anything else. The details on how to detect Tor are available to anybody who looks for them -- they just aren't readily accessible to the layman. For example, Tenable Networks had announced exactly this ability to detect Tor's traffic the previous month, because any techie who wanted to could have found out how. Indeed, Tenable's announcement may have been the impetus for BBG's request to Tor: "can you fix it so that this new Tenable feature no longer works?".

To be clear, there are zero secret "vulnerability details" here that some secret spy agency could use to detect Tor. They were already known, already in the Tenable product, and within the grasp of any techie who wanted to discover them. A spy agency could just buy Tenable, or copy it, instead of going through this intricate conspiracy.

Conclusion

The issue isn't a "vulnerability". Tor traffic is recognizable on the network, and over time, they make it less and less recognizable. Eventually they'll just piggyback on true HTTPS and convince CloudFlare to host ingress nodes, or something, making it completely undetectable. In the meanwhile, it leaves behind fingerprints, as I showed above.

What we see in the email exchanges is the normal interaction of a donor asking for a feature, not a private "tip off". It's likely the donor is the one who tipped off Tor, pointing out Tenable's product to detect Tor.

Whatever secrets Tor could have tipped off to the "secret spy agency" were no more than what Tenable was already doing in a shipping product.




Update: People are trying to make it look like Voice of America is some sort of intelligence agency. That's a conspiracy theory. It's not a member of the American intelligence community. You'd have to come up with a solid reason explaining why the United States is hiding VoA's membership in the intelligence community, or you'd have to believe that everything in the U.S. government is really just some arm of the C.I.A.

Blame privacy activists for the Memo??

Former FBI agent Asha Rangappa @AshaRangappa_ has a smart post debunking the Nunes Memo, then takes it all back again with an op-ed in the NYTimes blaming us privacy activists. She presents an obviously false narrative that the FBI and FISA courts are above suspicion.

I know from first-hand experience the FBI is corrupt. In 2007, they threatened me, trying to get me to cancel a talk that revealed security vulnerabilities in a large corporation's product. Such abuses occur because there is no transparency and oversight. FBI agents write down our conversations in their little notebooks instead of recording them, so that they can control the narrative of what happened, presenting their version of the conversation (leaving out the threats). In this day and age of recording devices, this is indefensible.

She writes "I know firsthand that it’s difficult to get a FISA warrant". Yes, the process was difficult for her, an underling, to get a FISA warrant. The process is different when a leader tries to do the same thing.

I know this first hand, having casually worked as an outsider with intelligence agencies. I saw two processes in place: one for the flunkies, and one for those above the system. The flunkies constantly complained about how there were too many processes in place oppressing them, preventing them from getting their jobs done. The leaders understood the system and how to sidestep those processes.

That's not to say the Nunes Memo has merit, but it does point out that privacy advocates have a point in wanting more oversight and transparency in such surveillance of American citizens.

Blaming us privacy advocates isn't the way to go. It's not going to succeed in tarnishing us, but will push us more into Trump's camp, causing us to reiterate that we believe the FBI and FISA are corrupt.

The problematic Wannacry North Korea attribution

Last month, the US government officially "attributed" the Wannacry ransomware worm to North Korea. This attribution has three flaws, which are a good lesson for attribution in general.

It was an accident

The most important fact about Wannacry is that it was an accident. We've had 30 years of experience with Internet worms teaching us that worms are always accidents. While launching worms may be intentional, their effects cannot be predicted. While they appear to have targets, like Slammer against South Korea, or Witty against the Pentagon, further analysis shows this was just a random effect that was impossible to predict ahead of time. Only in hindsight are these effects explainable.

We should hold those causing accidents accountable, too, but it's a different accountability. The U.S. has caused more civilian deaths in its War on Terror than the terrorists caused triggering that war. But we hold these to be morally different: the terrorists targeted the innocent, whereas the U.S. takes great pains to avoid civilian casualties. 

Since we are talking about blaming those responsible for accidents, we also must include the NSA in that mix. The NSA created, then allowed the release of, weaponized exploits. That's like accidentally dropping a load of unexploded bombs near a village. When those bombs are then used, those who lost the weapons are held guilty along with those who used them. Yes, while we should blame the hacker who added ETERNALBLUE to their ransomware, we should also blame the NSA for losing control of ETERNALBLUE.


A country and its assets are different

Was it North Korea, or hackers affiliated with North Korea? These aren't the same.

It's hard for North Korea to have hackers of its own. It doesn't have citizens who grow up with computers to pick from. Moreover, an internal hacking corps would create tainted citizens exposed to dangerous outside ideas. Update: Some people have pointed out that Kim Il-sung University in the capital does have some contact with the outside world, with academics granted limited Internet access, so I guess some tainting is allowed. Still, what we know of North Korea's hacking efforts largely comes from hackers they employ outside North Korea. It was the Lazarus Group, outside North Korea, that did Wannacry.

Instead, North Korea develops external hacking "assets", supporting several external hacking groups in China, Japan, and South Korea. This is similar to how intelligence agencies develop human "assets" in foreign countries. While these assets do things for their handlers, they also have normal day jobs, and do many things that are wholly independent and even sometimes against their handler's interests.

For example, this Muckrock FOIA dump shows how "CIA assets" independently worked for Castro and assassinated a Panamanian president. That they also worked for the CIA does not make the CIA responsible for the Panamanian assassination.

That CIA/intelligence assets work this way is well-known and uncontroversial. The fact that countries use hacker assets like this is the controversial part. These hackers do act independently, yet we refuse to consider this when we want to "attribute" attacks.


Attribution is political

We have far better attribution for the nPetya attacks. It was less accidental (they clearly desired to disrupt Ukraine), and the hackers were much closer to the Russian government (Russian citizens). Yet, the Trump administration isn't fighting Russia, they are fighting North Korea, so they don't officially attribute nPetya to Russia, but do attribute Wannacry to North Korea.

Trump is in conflict with North Korea. He is looking for ways to escalate the conflict. Attributing Wannacry helps achieve his political objectives.

That it was blatantly politics is demonstrated by the way it was released to the press. It wasn't released in the normal way, where the administration can stand behind it and get challenged on the particulars. Instead, it was pre-released through the usual channel of "anonymous government officials" to the NYTimes, and then backed up with an op-ed in the Wall Street Journal. The government leaks information like this when it's weak, not when it's strong.

The proper way is to release the evidence upon which the decision was made, so that the public can challenge it. Among the questions the public would ask is whether they believe it was North Korea's intention to cause precisely this effect, such as disabling the British NHS. Or, whether it was merely hackers "affiliated" with North Korea, or hackers carrying out North Korea's orders. We cannot challenge the government this way because the government intentionally holds itself above such accountability.


Conclusion

We believe hacking groups tied to North Korea are responsible for Wannacry. Yet, even if that's true, we still have three attribution problems. We still don't know if that was intentional, in pursuit of some political goal, or an accident. We still don't know if it was at the direction of North Korea, or whether their hacker assets acted independently. We still don't know if the government has answers to these questions, or whether it's exploiting this doubt to achieve political support for actions against North Korea.


"Skyfall attack" was attention seeking

After the Meltdown/Spectre attacks, somebody created a website promising related "Skyfall/Solace" attacks. They revealed today that it was a "hoax".

It was a bad hoax. It wasn't a clever troll, parody, or commentary. It was childish behavior seeking attention.

However much you hate the naming of security vulnerabilities, Meltdown/Spectre was important enough to deserve a name. Sure, from an infosec perspective, it was minor; we just patch and move on. But from an operating-system and CPU-design perspective, these things were huge.

Page table isolation to fix Meltdown is a fundamental redesign of the operating system. What you learned in college about how Solaris, Windows, Linux, and BSD were designed is now out-of-date. It's on the same scale of change as address space randomization.

The same is true of Spectre. It changes what capabilities are given to JavaScript (buffers and high resolution timers). It dramatically increases the paranoia we have of running untrusted code from the Internet. We've been cleansing JavaScript of things like buffer-overflows and type confusion errors, now we have to cleanse it of branch prediction issues.

Moreover, not only do we need to change software, we need to change the CPU. No, we won't get rid of branch prediction and out-of-order execution, but there are things that can easily be done to mitigate these attacks. We won't be recalling the billions of CPUs already shipped, and it will take a year before fixed CPUs appear on the market, but it's still an important change. That we fix security through such a massive hardware change is by itself worthy of "names".

Yes, the "naming" of vulnerabilities is annoying. A bunch of vulns named by their creators have disappeared, and we've stopped talking about them. On the other hand, we still talk about Heartbleed and Shellshock, because they were damn important. A decade from now, we'll still be talking about Meltdown/Spectre. Even if they hadn't been named by their creators, we still would've come up with nicknames to talk about them, because CVE numbers are so inconvenient.

Thus, the hoax's mocking of the naming is invalid. It was largely incoherent rambling from somebody who really doesn't understand the importance of these vulns, who uses the hoax to promote themselves.

Some notes on Meltdown/Spectre

I thought I'd write up some notes.

You don't have to worry if you patch. If you download the latest update from Microsoft, Apple, or Linux, then the problem is fixed for you and you don't have to worry. If you aren't up to date, then there's a lot of other nasties out there you should probably also be worrying about. I mention this because while this bug is big in the news, it's probably not news the average consumer needs to concern themselves with.

This will force a redesign of CPUs and operating systems. While not a big news item for consumers, it's huge in the geek world. We'll need to redesign operating systems and how CPUs are made.

Don't worry about the performance hit. Some, especially avid gamers, are concerned about the claims of "30%" performance reduction when applying the patch. That's only in some rare cases, so you shouldn't worry too much about it. As far as I can tell, 3D games aren't likely to see less than 1% performance degradation. If you imagine your game is suddenly slower after the patch, then something else broke it.

This wasn't foreseeable. A common cliche is that such bugs happen because people don't take security seriously, or that they are taking "shortcuts". That's not the case here. Speculative execution and timing issues with caches are inherent issues with CPU hardware. "Fixing" this would make CPUs run ten times slower. Thus, while we can tweak hardware going forward, the larger change will be in software.

There's no good way to disclose this. The cybersecurity industry has a process for coordinating the release of such bugs, which appears to have broken down. In truth, it didn't. Once Linus announced a security patch that would degrade performance of the Linux kernel, we knew the coming bug was going to be Big. Looking at the Linux patch, tracking backwards to the bug was only a matter of time. Hence, the release of this information was a bit sooner than some wanted. This is to be expected, and is nothing to be upset about.

It helps to have a name. Many are offended by the crassness of naming vulnerabilities and giving them logos. On the other hand, we are going to be talking about these bugs for the next decade. Having a recognizable name, rather than a hard-to-remember number, is useful.

Should I stop buying Intel? Intel has the worst of the bugs here. On the other hand, ARM and AMD alternatives have their own problems. Many want to deploy ARM servers in their data centers, but these are likely to expose bugs you don't see on x86 servers. The software fix, "page table isolation", seems to work, so there might not be anything to worry about. On the other hand, holding up purchases because of "fear" of this bug is a good way to squeeze price reductions out of your vendor. Conversely, later generation CPUs, "Haswell" and even "Skylake" seem to have the least performance degradation, so it might be time to upgrade older servers to newer processors.

Intel misleads. Intel has a press release that implies they are not impacted any worse than others. This is wrong: the "Meltdown" issue appears to apply only to Intel CPUs. I don't like such marketing crap, so I mention it.




Statements from companies:










Why Meltdown exists

So I thought I'd answer this question. I'm not a "chipmaker", but I've been optimizing low-level x86 assembly language for a couple of decades.





The tl;dr version is this: the CPUs have no bug. The results are correct, it's just that the timing is different. CPU designers will never fix the general problem of undetermined timing.

CPUs are deterministic in the results they produce. If you add 5+6, you always get 11 -- always. On the other hand, the amount of time they take is non-deterministic. Run a benchmark on your computer. Now run it again. The amount of time it took varies, for a lot of reasons.

That CPUs take an unknown amount of time is an inherent problem in CPU design. Even if you do everything right, "interrupts" from clock timers and network cards will still cause undefined timing problems. Therefore, CPU designers have thrown the concept of "deterministic time" out the window.

The biggest source of non-deterministic behavior is the high-speed memory cache on the chip. When a piece of data is in the cache, the CPU accesses it immediately. When it isn't, the CPU has to stop and wait for slow main memory. Other things happening in the system impact the cache, unexpectedly evicting recently used data for one purpose in favor of data for another purpose.

Hackers love "non-deterministic", because while such things are unknowable in theory, they are often knowable in practice.

That's the case with the granddaddy of all hacker exploits, the "buffer overflow". From the programmer's perspective, the bug just results in the software crashing for undefinable reasons. From the hacker's perspective, they reverse engineer what's going on underneath, then carefully craft buffer contents so the program doesn't crash, but instead continues to run the code the hacker supplies within the buffer. Buffer overflows are undefined in theory, well-defined in practice.

Hackers have already been exploiting these definable/undefinable timing problems with the cache for a long time. An example is cache timing attacks on AES. AES reads a matrix from memory as it encrypts things. By playing with the cache, evicting things, timing things, you can figure out the pattern of memory accesses, and hence the secret key.

Such cache timing attacks have been around since the beginning, really, and it's simply an unsolvable problem. Instead, we have workarounds, such as changing our crypto algorithms to not depend upon cache, or better yet, implement them directly in the CPU (such as the Intel AES specialized instructions).


What's happened today with Meltdown is that incompletely executed instructions, which discard their results, do affect the cache. We can then recover those partial/temporary/discarded results by measuring the cache timing. This has been known for a while, but we couldn't figure out how to successfully exploit this, as this paper from Anders Fogh reports. Hackers fixed this, making it practically exploitable.

As a CPU designer, Intel has few good options.

Fixing cache timing attacks is an impossibility. They can do some tricks, such as allowing some software to reserve part of the cache for private use, for special crypto operations, but the general problem is unsolvable.

Fixing the "incomplete results" problem from affecting the cache is also difficult. Intel has the fastest CPUs, and the reason is such speculative execution. The other CPU designers have the same problem: fixing the three problems identified today would cause massive performance issues. They'll come up with improvements, probably, but not complete solutions.

Instead, the fix is within the operating system. Frankly, it's a needed change that should've been done a decade ago. They've just been putting it off because of the performance hit. Now that the change has been forced to happen, CPU designers will probably figure out ways to mitigate the performance cost.


Thus, the Intel CPU you buy a year from now will have some partial fixes for exactly these problems, without addressing the larger security concerns. They will also have performance enhancements to make the operating-system patches faster.

But the underlying theoretical problem will never be solved, and is essentially unsolvable.

Let’s see if I’ve got Meltdown right

I thought I'd write down the proof-of-concept to see if I got it right.

So the Meltdown paper lists the following steps:

 ; flush cache
 ; rcx = kernel address
 ; rbx = probe array
 retry:
 mov al, byte [rcx]
 shl rax, 0xc
 jz retry
 mov rbx, qword [rbx + rax]
 ; measure which of 256 cachelines were accessed

So the first step is to flush the cache, so that none of the 256 possible cache lines in our "probe array" are in the cache. There are many ways this can be done.

Now pick a byte of secret kernel memory to read. Presumably, we'll just read all of memory, one byte at a time. The address of this byte is in rcx.

Now execute the instruction:
    mov al, byte [rcx]
This line of code will crash (raise an exception). That's because [rcx] points to secret kernel memory which we don't have permission to read. The value of the real al (the low-order byte of rax) will never actually change.

But fear not! Intel is massively out-of-order. That means before the exception happens, it will provisionally and partially execute the following instructions. While Intel has only 16 visible registers, it actually has 100 real registers. It'll stick the result in a pseudo-rax register. Only at the end of the long execution chain, if nothing bad happens, will the pseudo-rax register become the visible rax register.

But in the meantime, we can continue (with speculative execution) to operate on pseudo-rax. Right now it contains a byte, so we shift it left so that instead of indexing a byte it indexes a cache-line. (The instruction multiplies by 4096 instead of just 64, to prevent the prefetcher from loading multiple adjacent cache-lines).
 shl rax, 0xc

Now we use pseudo-rax to provisionally load the indicated cache-line from the probe array.
 mov rbx, qword [rbx + rax]

Since we already crashed up top on the first instruction, these results will never be committed to rax and rbx. However, the cache will have changed: Intel will have provisionally loaded that cache-line into the cache.

At this point, it's simply a matter of stepping through all 256 cache-lines in order to find the one that's fast (already in the cache) where all the others are slow.
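A hedged sketch of that measurement step, timing each access with the `rdtscp` timestamp counter (the threshold value is a made-up assumption; real exploits calibrate it per machine):

```c
#include <stdint.h>
#include <x86intrin.h>  /* __rdtscp(), per-access cycle timing */

#define PAGE      4096
#define THRESHOLD 120   /* cycles; assumed value, calibrate per CPU */

/* Time a single load; a cached line returns in well under
 * THRESHOLD cycles, an uncached one takes a DRAM round-trip. */
static int is_cached(volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                     /* the timed memory access */
    uint64_t end = __rdtscp(&aux);
    return (end - start) < THRESHOLD;
}

/* Walk all 256 cache-lines; the one that's fast is the one the
 * speculative load touched, i.e. the secret byte's value. */
static int recover_byte(volatile uint8_t *probe)
{
    for (int i = 0; i < 256; i++)
        if (is_cached(&probe[i * PAGE]))
            return i;
    return -1;  /* no hit this round; retry */
}
```

In practice the scan order is randomized and the whole flush-access-measure cycle is repeated many times per byte to filter out noise.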


Bitcoin: In Crypto We Trust

Tim Wu, who coined "net neutrality", has written an op-ed in the New York Times called "The Bitcoin Boom: In Code We Trust". He is wrong about "code".

The wrong "trust"

Wu builds a big manifesto about how real-world institutions can't be trusted. Certainly, this reflects the rhetoric from a vocal wing of Bitcoin fanatics, but it's not the Bitcoin manifesto.

Instead, the word "trust" in the Bitcoin paper is much narrower, referring to how online merchants can't trust credit-cards (for example). When I bought school supplies for my niece when she studied in Canada, the online site wouldn't accept my U.S. credit card. They didn't trust my credit card. However, they trusted my Bitcoin, so I used that payment method instead, and succeeded in the purchase.

Real-world currencies like dollars are tethered to the real-world, which means no single transaction can be trusted, because "they" (the credit-card company, the courts, etc.) may decide to reverse the transaction. The manifesto behind Bitcoin is that a transaction cannot be reversed -- and thus, can always be trusted.

Deliberately confusing the micro-trust in a transaction and macro-trust in banks and governments is a sort of bait-and-switch.

The wrong inspiration

Wu claims:
"It was, after all, a carnival of human errors and misfeasance that inspired the invention of Bitcoin in 2009, namely, the financial crisis."
Not true. Bitcoin did not appear fully formed out of the void, but was instead based upon a series of innovations that predate the financial crisis by a decade. Moreover, the financial crisis had little to do with "currency". The value of the dollar and other major currencies was essentially unscathed by the crisis. Certainly, enthusiasts looking backward like to cherry-pick the financial crisis as yet one more reason why the offline world sucks, but it had little to do with Bitcoin.

In crypto we trust

It's not in code that Bitcoin trusts, but in crypto. Satoshi makes that clear in one of his posts on the subject:
A generation ago, multi-user time-sharing computer systems had a similar problem. Before strong encryption, users had to rely on password protection to secure their files, placing trust in the system administrator to keep their information private. Privacy could always be overridden by the admin based on his judgment call weighing the principle of privacy against other concerns, or at the behest of his superiors. Then strong encryption became available to the masses, and trust was no longer required. Data could be secured in a way that was physically impossible for others to access, no matter for what reason, no matter how good the excuse, no matter what.
You don't possess Bitcoins. Instead, all the coins are on the public blockchain under your "address". What you possess is the secret, private key that matches the address. Transferring Bitcoin means using your private key to unlock your coins and transfer them to another. If you print out your private key on paper, and delete it from the computer, it can never be hacked.

Trust is in this crypto operation. Trust is in your private crypto key.

We don't trust the code

The manifesto "in code we trust" has been proven wrong again and again. We don't trust computer code (software) in the cryptocurrency world.

The most profound example is something known as the "DAO" on top of Ethereum, Bitcoin's major competitor. Ethereum allows "smart contracts" containing code. The quasi-religious manifesto of the DAO smart-contract is that the "code is the contract", that all the terms and conditions are specified within the smart-contract code, completely untethered from real-world terms-and-conditions.

Then a hacker found a bug in the DAO smart-contract and stole most of the money.

In principle, this is perfectly legal, because "the code is the contract", and the hacker just used the code. In practice, the system didn't live up to this. The Ethereum core developers, acting as central bankers, rewrote the Ethereum code to fix this one contract, returning the money back to its original owners. They did this because those core developers were themselves heavily invested in the DAO and got their money back.

Similar things happen with the original Bitcoin code. A disagreement has arisen about how to expand Bitcoin to handle more transactions. One group wants smaller and "off-chain" transactions. Another group wants a "large blocksize". This caused a "fork" in Bitcoin with two versions, "Bitcoin" and "Bitcoin Cash". The fork championed by the core developers (central bankers) is worth around $20,000 right now, while the other fork is worth around $2,000.

So it's still "in central bankers we trust", it's just that now these central bankers are mostly online instead of offline institutions. They have proven to be even more corrupt than real-world central bankers. It's certainly not the code that is trusted.

The bubble

Wu repeats the well-known reference to Amazon during the dot-com bubble. If you bought Amazon's stock for $107 right before the dot-com crash, it would still be one of the wisest investments you could've made. Amazon shares are now worth around $1,200 each.

The implication is that Bitcoin, too, may have such long term value. Even if you buy it today and it crashes tomorrow, it may still be worth ten-times its current value in another decade or two.

This is a poor analogy, for three reasons.

The first reason is that we knew the Internet had fundamentally transformed commerce. We knew there were going to be winners in the long run, it was just a matter of picking who would win (Amazon) and who would lose (Pets.com). We have yet to prove Bitcoin will be similarly transformative.

The second reason is that businesses are real, they generate real income. While the stock price may include some irrational exuberance, it's ultimately still based on the rational expectations of how much the business will earn. With Bitcoin, it's almost entirely irrational exuberance -- there are no long term returns.

The third flaw in the analogy is that there are an essentially infinite number of cryptocurrencies. We saw this today as Coinbase started trading Bitcoin Cash, a fork of Bitcoin. The two are nearly identical, so there's little reason one should be so much more valuable than the other. It's only a fickle fad that makes one more valuable than another, not business fundamentals. The successful future cryptocurrency is unlikely to exist today, but will be invented in the future.

The lesson of the dot-com bubble is not that Bitcoin will have long term value, but that cryptocurrency companies like Coinbase and BitPay will have long term value. Or, the lesson is that "old" companies like JPMorgan that are early adopters of the technology will grow faster than their competitors.

Conclusion

The point of Wu's op-ed is to distinguish between trust in traditional real-world institutions and trust in computer software code. This is an inaccurate reading of the situation.

Bitcoin is not about replacing real-world institutions but about untethering online transactions.

The trust in Bitcoin is in crypto -- the power crypto gives individuals instead of third-parties.

The trust is not in the code. Bitcoin is a "cryptocurrency" not a "codecurrency".

Libertarians are against net neutrality

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. "Net neutrality" is a case of one hand clapping: you rarely hear the competing side, and thus that side may sound attractive. This post is about the other side, from a libertarian point of view.



That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn't it.

This thing they call "net neutrality" is just left-wing politics masquerading as some sort of principle. It's no different than how people claim to be "pro-choice", yet demand forced vaccinations. Or, it's no different than how people claim to believe in "traditional marriage" even while they are on their third "traditional marriage".

Properly defined, "net neutrality" means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox's cloud backup or BitTorrent's peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about "net neutrality" than Trump or Gingrich care about "traditional marriage".

Instead, when people say "net neutrality", they mean "government regulation". It's the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC "Open Internet" order and reclassification of broadband under "Title II" so they can regulate it. Trump's FCC is putting broadband back to "Title I", which means the FCC can't regulate most of its "Open Internet" order.

Don't be tricked into thinking the "Open Internet" order is anything but intensely political. The premise behind the order is the Democrats' firm belief that it's government that created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities that will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate "tactics" to hurt consumers, and continues with similar language throughout the order.


A good contrast to this can be seen in Tim Wu's non-political original paper in 2003 that coined the term "net neutrality". Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC's order can be seen in this comment from one of the commissioners who voted for those rules:
FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. "We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy," Commissioner Rosenworcel said.
This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.


But what about monopolies? After all, while the free-market may work when there's competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there's often only a single credible high-speed broadband provider. But this isn't the issue at stake here. The FCC isn't proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that governs every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC's reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won't go so far as to set prices, that it's only regulating against these conspiracy-theory evils. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren't. Why stop at regulating only half the evil?

The answer is that the claim of "monopoly" power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 "Open Internet" order, the FCC simultaneously redefined what it meant by "broadband", upping the speed from 5-mbps to 25-mbps. That's because while most consumers have multiple choices at 5-mbps, fewer have multiple choices at 25-mbps. It's a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it's monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

"But corporations are indeed evil", people argue, "see here's a list of evil things they have done in the past!"

No, those things weren't evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common "net neutrality abuses" that people mention is AT&T's blocking of FaceTime. I've debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a "net neutrality" issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It's disingenuous to claim it's an evil that justifies FCC action when the FCC itself declared it not evil and took no action. It's disingenuous to cite the "net neutrality" principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast's throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there's limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn't mind this throttling, because it often took days to download a big file anyway.

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they'd vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was "network management". AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the "network management" issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It's problematic because it's designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it's bad when confronted with mobile bandwidth caps.

With small mobile devices, you don't want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That's the reasoning behind T-Mobile's offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they'll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, "zero-rating" certain video providers and degrading video quality, the FCC allows this, because they recognize it's in the customer interest.

Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can't just innovate; they must ask the FCC for permission first. And with the new heavy-handed rules, the FCC has become hostile to this innovation. This attitude is highlighted by this statement from the "Open Internet" order:
And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”
This is a clear declaration that the free market doesn't work and won't correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of "net neutrality".

The 2015 "Open Internet" order is not about "treating network traffic neutrally", because it doesn't do that. Instead, it's purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

It's not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most "network management" issues. Even if it were just about wired broadband (like Comcast), it's still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this "net neutrality" slogan, you've got to do better than mindlessly repeating the arguments of the left-wing. The term itself, "net neutrality", is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the "net neutrality" technical principle that all traffic should be treated equally, then you'll want a rewrite of the "Open Internet" order.

In the end, while libertarians may still support some form of broadband regulation, it's impossible to reconcile libertarianism with the 2015 "Open Internet", or the vague things people mean by the slogan "net neutrality".

A Thanksgiving Carol: How Those Smart Engineers at Twitter Screwed Me

Thanksgiving Holiday is a time for family and cheer. Well, a time for family. It's the holiday where we ask our doctor relatives to look at that weird skin growth, and for our geek relatives to fix our computers. This tale is of such computer support, and how the "smart" engineers at Twitter have ruined this for life.

My mom is smart, but not a good computer user. I get my enthusiasm for science and math from my mother, and she has no problem understanding the science of computers. She keeps up when I explain Bitcoin. But she has difficulty using computers. She has this emotional, irrational belief that computers are out to get her.

This makes helping her difficult. Every problem is described in terms of what the computer did to her, not what she did to her computer. It's the computer that needs to be fixed, instead of the user. When I showed her the "haveibeenpwned.com" website (part of my tips for securing computers), it showed her Tumblr password had been hacked. She swore she never created a Tumblr account -- that somebody or something must have done it for her. Except, I was there five years ago and watched her create it.

Another example is how GMail is deleting her emails for no reason, corrupting them, and changing the spelling of her words. She emails the way an impatient teenager texts -- all of us in the family know the misspellings are not GMail's fault. But I can't help her with this because she keeps her GMail inbox clean, deleting all her messages, leaving no evidence behind. She has only a vague description of the problem that I can't make sense of.

This last March, I tried something to resolve this. I configured her GMail to send a copy of all incoming messages to a new, duplicate account on my own email server. With evidence in hand, I would then be able to solve what's going on with her GMail. I'd be able to show her which steps she took, which buttons she clicked on, and what caused the weirdness she's seeing.

Today, while the family was in a state of turkey-induced torpor, my mom brought up a problem with Twitter. She doesn't use Twitter, she doesn't have an account, but they keep sending tweets to her phone, about topics like Denzel Washington. And she said something about "peaches" I didn't understand.

This is how the problem descriptions always start, chaotic, with mutually exclusive possibilities. If you don't use Twitter, you don't have the Twitter app installed, so how are you getting Tweets? Over much gnashing of teeth, it comes out that she's getting emails from Twitter, not tweets, about Denzel Washington -- to someone named "Peaches Graham". Naturally, she can only describe these emails, because she's already deleted them.

"Ah ha!", I think. I've got the evidence! I'll just log onto my duplicate email server, and grab the copies to prove to her it was something she did.

I find she is indeed receiving such emails, called "Moments", about topics trending on Twitter. They are signed with "DKIM", proving they are legitimate rather than from a hacker or spammer. The only way that can happen is if my mother signed up for Twitter, despite her protestations that she didn't.

I look further back and find that there were also confirmation messages involved. Back in August, she got a typical Twitter account signup message. I am now seeing a little bit more of the story unfold with this "Peaches Graham" name on the account. It wasn't my mother who initially signed up for Twitter, but Peaches, who misspelled the email address. It's one of the reasons why the confirmation process exists, to make sure you spelled your email address correctly.

It's now obvious my mom accidentally clicked on the [Confirm] button. I don't have any proof she did, but it's the only reasonable explanation. Otherwise, she wouldn't have gotten the "Moments" messages. My mom disputed this, emphatically insisting she never clicked on the emails.

It's at this point that I made a great mistake, saying:

"This sort of thing just doesn't happen. Twitter has very smart engineers. What's the chance they made the mistake here, or...".

I recognized the condescension of the words as they came out of my mouth, but dug myself deeper with:

"...or that the user made the error?"

This was wrong to say even if I were right. I have no excuse. I mean, maybe I could argue that it's really her fault, for not raising me right, but no, this is only on me.

Regardless of what caused the Twitter emails, the problem needs to be fixed. The solution is to take control of the Twitter account by using the password reset feature. I went to the Twitter login page, clicked on "Lost Password", got the password reset message, and reset the password. I then reconfigured the account to never send anything to my mom again.

But when I logged in I got an error saying the account had not yet been confirmed. I paused. The family dog eyed me in wise silence. My mom hadn't clicked on the [Confirm] button -- the proof was right there. Moreover, it hadn't been confirmed for a long time, since the account was created in 2011.

I interrogated my mother some more. It appears that this has been going on for years. She's just been deleting the emails without opening them, both the "Confirmations" and the "Moments". She made it clear she does it this way because her son (that would be me) instructs her to never open emails she knows are bad. That's how she could be so certain she never clicked on the [Confirm] button -- she never even opens the emails to see the contents.

My mom is a prolific email user. In the last eight months, I've received over 10,000 emails in the duplicate mailbox on my server. That's a lot. She's technically retired, but she volunteers for several charities, goes to community college classes, and is joining an anti-Trump protest group. She has a daily routine for triaging and processing all the emails that flow through her inbox.

So here's the thing, and there's no getting around it: my mom was right, on all particulars. She had done nothing, the computer had done it to her. It's Twitter who is at fault, having continued to resend that confirmation email every couple months for six years. When Twitter added their controversial "Moments" feature a couple years back, somehow they turned on Notifications for accounts that technically didn't fully exist yet.

Being right this time means she might be right the next time the computer does something to her without her touching anything. My attempts at making computers seem rational have failed. That they are driven by untrustworthy spirits is now a reasonable alternative.

Those "smart" engineers at Twitter screwed me. Continuing to send confirmation emails for six years is stupid. Sending Notifications to unconfirmed accounts is stupid. Yes, I know at the bottom of the message it gives a "Not my account" selection that she could have clicked on, but it's small and easily missed. In any case, my mom never saw that option, because she's been deleting the messages without opening them -- for six years.

Twitter can fix their problem, but it's not going to help mine. Forever more, I'll be unable to convince my mom that the majority of her problems are because of user error, and not because the computer people are out to get her.


Don Jr.: I’ll bite

So Don Jr. tweets the following, which is an excellent troll. So I thought I'd bite. The reason is that I just got through debunking Democrat claims about net neutrality, so it seems like a good time to balance things out and debunk Trump nonsense.

The issue here is not which side is right. The issue here is whether you stand for truth, or whether you'll seize any factoid that appears to support your side, regardless of the truthfulness of it. The ACLU obviously chose falsehoods, as I documented. In the following tweet, Don Jr. does the same.

It's a preview of the hyperpartisan debates you are likely to have across the dinner table tomorrow, with each side trying to outdo the other in the falsehoods they'll claim.

What we see in these numbers is a steady trend since the Great Recession, with no evidence in the graphs that Trump has influenced them, one way or the other.

Stock markets at all time highs

This is true, but it's obviously not due to Trump. The stock markets have been steadily rising since the Great Recession. Trump has done nothing substantive to change the market's trajectory, nor has he inspired the market to change its direction.


To be fair to Don Jr., we've all been crediting (or blaming) presidents for changes in the stock market despite the fact they have almost no influence over it. Presidents don't run the economy, it's an inappropriate conceit. The most influence they've had is in harming it.

Lowest jobless claims since '73

Again, let's graph this:


As we can see, jobless claims have been on a smooth downward trajectory since the Great Recession. It's difficult to see here how President Trump has influenced these numbers.

6 Trillion added to the economy

What he's referring to is that assets have risen in value, like the stock market, homes, gold, and even Bitcoin.

But this is a well-known fallacy called Mercantilism: the belief that the "economy" is measured by the value of its assets. This was debunked by Adam Smith in his book "The Wealth of Nations", where he showed instead that the "economy" is measured by how much it produces (GDP, Gross Domestic Product), not by its assets.

GDP has grown at 3.0%, which is pretty good compared to the long term trend, and is better than Europe or Japan (though not as good as China). But Trump doesn't deserve any credit for this -- today's rise in GDP is the result of stuff that happened years ago.

Assets have risen by $6 trillion, but that's not a good thing. After all, when you sell your home for more money, the buyer has to pay more. So one person is better off and one is worse off, so the net effect is zero.

Actually, such asset price increase is a worrisome indicator -- we are entering into bubble territory. It's the result of a loose monetary policy, low interest rates and "quantitative easing" that was designed under the Obama administration to stimulate the economy. That's why all assets are rising in value. Normally, a rise in one asset means a fall in another, like selling gold to pay for houses. But because of loose monetary policy, all assets are increasing in price. The amazing rise in Bitcoin over the last year is as much a result of this bubble growing in all assets as it is to an exuberant belief in Bitcoin.

When this bubble collapses, which may happen during Trump's term, it'll really be the Obama administration who is to blame. I mean, if Trump is willing to take credit for the asset price bubble now, I'm willing to give it to him, as long as he accepts the blame when it crashes.

1.5 million fewer people on food stamps

As you'd expect, I'm going to debunk this with a graph: the numbers have been falling since the Great Recession. Indeed, in the equivalent preceding period under Obama, 1.9 million fewer people were on food stamps, so Trump's performance is actually slightly behind Obama's, not ahead of it. Of course, neither president is really responsible.

Consumer confidence through the roof

Again we are going to graph this number:


Again we find nothing in the graph that suggests President Trump is responsible for any change -- it's been improving steadily since the Great Recession.

One thing to note is that, technically, it's not "through the roof" -- it's still quite a bit below the roof set during the dot-com era.

Lowest Unemployment rate in 17 years

Again, let's simply graph it over time and look for Trump's contribution. As we can see, there doesn't appear to be anything special Trump has done -- unemployment has been steadily improving since the Great Recession.


But here's the thing, the "unemployment rate" only measures those looking for work, not those who have given up. The number that concerns people more is the "labor force participation rate". The Great Recession kicked a lot of workers out of the economy.


Mostly this is because Baby Boomers are now retiring and leaving the workforce, and some have chosen to retire early rather than look for another job. But there are still other problems in our economy that contribute to this. President Trump has done nothing in particular to solve them.

Conclusion

As we see, Don Jr's tweet is a troll. When we look at the graphs of these indicators going back to the Great Recession, we don't see how President Trump has influenced anything. The improvements this year are in line with the improvements last year, which are in turn in line with the improvements of the previous year.

To be fair, all parties credit their President with improvements during their term. President Obama's supporters did the same thing. But at least right now, with these numbers, we can see that there's no merit to anything in Don Jr's tweet.

The hyperpartisan rancor in this country exists because neither side cares about the facts. We should care. We should care that Don Jr's claims suck, even if we are Republicans. Conversely, we should care that those NetNeutrality claims by Democrats suck, even if we are Democrats.



NetNeutrality vs. limiting FaceTime

People keep retweeting this ACLU graphic in regards to NetNeutrality. In this post, I debunk the fourth item. In previous posts [1] [2] I debunk other items.


But here's the thing: the FCC allowed these restrictions, despite the FCC's "Open Internet" order forbidding such things. In other words, despite the graphic's claims it "happened without net neutrality rules", the opposite is true, it happened with net neutrality rules.

The FCC explains why they allowed it in their own case study on the matter. The short version is this: AT&T's network couldn't handle the traffic, so it was appropriate to restrict it until some time in the future (the LTE rollout) until it could. The issue wasn't that AT&T was restricting FaceTime in favor of its own video-calling service (it didn't have one), but it was instead an issue of "bandwidth management".

When Apple released FaceTime, they themselves restricted its use to WiFi, preventing its use on cell phone networks. That's because Apple recognized mobile networks couldn't handle it.

When Apple flipped the switch and allowed its use on mobile networks, because mobile networks had gotten faster, they clearly said "carrier restrictions may apply". In other words, Apple was saying "carriers may restrict FaceTime with our blessing if they can't handle the load".

When Tim Wu wrote his paper defining "NetNeutrality" in 2003, he anticipated just this scenario. He wrote:
"The goal of bandwidth management is, at a general level, aligned with network neutrality."
He doesn't give "bandwidth management" a completely free pass. He mentions the issue frequently in his paper with a less favorable description, such as here:
Similarly, while managing bandwidth is a laudable goal, its achievement through restricting certain application types is an unfortunate solution. The result is obviously a selective disadvantage for certain application markets. The less restrictive means is, as above, the technological management of bandwidth. Application-restrictions should, at best, be a stopgap solution to the problem of competing bandwidth demands. 
And that's what AT&T's FaceTime limiting was: an unfortunate stopgap solution until LTE was more fully deployed, which is fully allowed under Tim Wu's principle of NetNeutrality.

So the ACLU's claim above is fully debunked: such things did happen even with NetNeutrality rules in place, and should happen.

Finally, and this is probably the most important part, AT&T didn't block it in the network. Instead, they blocked the app on the phone. If you jailbroke your phone, you could use FaceTime as you wished. Thus, it's not a "network" neutrality issue because no blocking happened in the network.

NetNeutrality vs. Verizon censoring Naral

People keep retweeting this ACLU graphic in support of net neutrality. It's wrong. In this post, I debunk the second item. I debunk other items in other posts [1] [4].


Firstly, it's not a NetNeutrality issue (which applies only to the Internet), but an issue with text-messages. In other words, it's something that will continue to happen even with NetNeutrality rules. People relate this to NetNeutrality as an analogy, not because it actually is such an issue.

Secondly, it's an edge/content issue, not a transit issue. The detail in this case is that Verizon provides a program for sending bulk messages to its customers from the edge of the network. Verizon isn't censoring text messages in transit, but at the edge. You can send a text message to your friend on the Verizon network, and it won't be censored. Thus the analogy is incorrect -- the correct analogy would be with content providers like Twitter and Facebook, not ISPs like Comcast.

Like all cell phone vendors, Verizon polices this content, canceling accounts that abuse the system, like spammers. We all agree such censorship is a good thing, and that such censorship by content providers is not remotely a NetNeutrality issue. Content providers do this not because they disapprove of the content of spam so much as because of the distaste their customers have for spam.

Content providers that are political, rather than neutral to politics, are indeed worrisome. It's not a NetNeutrality issue per se, but it is a general "neutrality" issue. We free-speech activists want all content providers (Twitter, Facebook, Verizon mass-texting programs) to be free of political censorship -- though we don't want government to mandate such neutrality.

But even here, Verizon may be off the hook. They appear not to be censoring one political view over another, but the controversial/unsavory way Naral expresses its views. Presumably, Verizon would be okay with less controversial political content.

In other words, as Verizon expresses its principles, it wants to block content that drives away customers, but is otherwise neutral to the content. While this may unfairly target controversial political content, it's at least basically neutral.

So in conclusion, while activists portray this as a NetNeutrality issue, it isn't. It's not even close.

NetNeutrality vs. AT&T censoring Pearl Jam

People keep retweeting this ACLU graphic in response to the FCC's net neutrality decision. In this post, I debunk the first item on the list. In other posts [2] [4] I debunk other items.


First of all, this obviously isn't a Net Neutrality case. The case isn't about AT&T acting as an ISP transiting network traffic. Instead, this was about AT&T being a content provider, through their "Blue Room" subsidiary, whose content traveled across other ISPs. Such things will continue to happen regardless of the most stringent enforcement of NetNeutrality rules, since the FCC doesn't regulate content providers.

Second of all, it wasn't AT&T who censored the traffic. It wasn't their Blue Room subsidiary who censored the traffic. It was a third party company they hired to bleep things like swear words and nipple slips. You are blaming AT&T for a decision by a third party that went against AT&T's wishes. It was an accident, not AT&T policy.

Thirdly, and this is the funny bit, Tim Wu, the guy who defined the term "net neutrality", recently wrote an op-ed claiming that while ISPs shouldn't censor traffic, content providers should. In other words, he argues that companies like AT&T's Blue Room should censor political content.

What activists like the ACLU say about NetNeutrality has as little relationship to the truth as Trump's tweets. Both pick "facts" that agree with them only so long as you don't look into them.

The FCC has never defended Net Neutrality

This op-ed by a "net neutrality expert" claims the FCC has always defended "net neutrality". It's garbage.

This is wrong on its face. It imagines that decades ago the FCC enshrined some plaque on the wall stating principles that subsequent FCC commissioners have diligently followed. The opposite is true. FCC commissioners are a chaotic bunch, with different interests, influenced (i.e. "lobbied" or "bribed") by different telecommunications/Internet companies. Rather than following a principle, their Internet regulatory actions have been ad hoc and arbitrary -- for decades.

Sure, you can cherry pick some of those regulatory actions as fitting a "net neutrality" narrative, but most actions don't fit that narrative, and there have been gross net neutrality violations that the FCC has ignored.


There are gross violations going on right now that the FCC is allowing. The most egregious is the "zero-rating" of video traffic on T-Mobile. This is a clear violation of the principles of net neutrality, yet the FCC is allowing it -- despite official "net neutrality" rules in place.

The op-ed above claims that "this [net neutrality] principle was built into the architecture of the Internet". The opposite is true. Traffic discrimination was built into the architecture since the beginning. If you don't believe me, read RFC 791 and the "precedence" field.

More concretely, from the beginning of the Internet as we know it (the 1990s), CDNs (content delivery networks) have provided a fast-lane for customers willing to pay for it. These CDNs are so important that the Internet wouldn't work without them.

I just traced the route of my CNN live stream. It comes from a server 5 miles away, instead of CNN's headquarters 2500 miles away. That server is located inside Comcast's network, because CNN pays Comcast a lot of money to get a fast-lane to Comcast's customers.

The reason these egregious net neutrality violations exist is that they are in the interests of customers. Moving content closer to customers helps them. Re-prioritizing (and charging less for) high-bandwidth video over cell networks helps them.

You might say it's okay that the FCC bends net neutrality rules when it benefits consumers, but that's garbage. Net neutrality claims these principles are sacred and should never be violated. Obviously, that's not true -- they should be violated when it benefits consumers. This means what net neutrality is really saying is that ISPs can't be trusted to always act to benefit consumers, and therefore need government oversight. Well, if that's your principle, then what you are really saying is that you are a left-winger, not that you believe in net neutrality.

Anyway, my point is that the above op-ed cherry picks a few data points in order to build a narrative that the FCC has always regulated net neutrality. A larger view is that the FCC has never defended this on principle, and is indeed, not defending it right now, even with "net neutrality" rules officially in place.

Your Holiday Cybersecurity Guide

Many of us are visiting parents/relatives this Thanksgiving/Christmas, and will have an opportunity to help them with cybersecurity issues. I thought I'd write up a quick guide to the most important things.

1. Stop them from reusing passwords

By far the biggest threat to average people is that they re-use the same password across many websites, so that when one website gets hacked, all their accounts get hacked.

To demonstrate the problem, go to haveibeenpwned.com and enter the email address of your relatives. This will show them a number of sites where their password has already been stolen, like LinkedIn, Adobe, etc. That should convince them of the severity of the problem.
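For the technically inclined relative, the same sort of check can be done programmatically. Have I Been Pwned also exposes a "Pwned Passwords" range API that uses k-anonymity: only the first five hex characters of the password's SHA-1 hash are ever sent, and matching happens locally. A minimal sketch (the actual network call is shown only in a comment; the API returns "SUFFIX:COUNT" lines):

```python
# Sketch of the k-anonymity check behind the Pwned Passwords API.
# Only the first 5 hex chars of the SHA-1 hash leave your machine.
import hashlib

def hash_parts(password):
    """Split the SHA-1 of a password into the 5-char prefix sent to the
    API and the 35-char suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_response(suffix, response_text):
    """Given the API response body ("SUFFIX:COUNT" per line), return how
    many times the password appeared in known breaches (0 = not found)."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# The actual lookup would be something like:
#   import urllib.request
#   prefix, suffix = hash_parts("hunter2")
#   body = urllib.request.urlopen(
#       "https://api.pwnedpasswords.com/range/" + prefix).read().decode()
#   print(count_in_response(suffix, body))
```

Seeing a breach count in the thousands for their favorite password is usually more convincing than any lecture.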

They don't need a separate password for every site. For the majority of websites, you don't care whether you get hacked, so use a common password for all the meaningless sites. You only need unique passwords for important accounts, like email, Facebook, and Twitter.

Write down passwords and store them in a safe place. Sure, it's a common joke that people in offices write passwords on Post-It notes stuck on their monitors or under their keyboards. This is a common security mistake, but that's only because the office environment is widely accessible. Your home isn't, and there's plenty of places to store written passwords securely, such as in a home safe. Even if it's just a desk drawer, such passwords are safe from hackers, because they aren't on a computer.

Write them down, with pen and paper. Don't put them in a MyPasswords.doc, because when a hacker breaks in, they'll easily find that document and easily hack your accounts.

You might help them out with getting a password manager, or two-factor authentication (2FA). Good 2FA like YubiKey will stop a lot of phishing threats. But this is difficult technology to learn, and of course, you'll be on the hook for support issues, such as when they lose the device. Thus, while 2FA is best, I'm only recommending pen-and-paper to store passwords. (AccessNow has a guide, though I think YubiKey/U2F keys for Facebook and GMail are the best).


2. Lock their phone (passcode, fingerprint, faceprint)

You'll lose your phone at some point. It has the keys to all your accounts, like email and so on. With your email, phone thieves can then reset the passwords on all your other accounts. Thus, it's incredibly important to lock the phone.

Apple has made this especially easy with fingerprints (and now faceprints), so there's little excuse not to lock the phone.

Note that Apple iPhones are the most secure. I give my mother my old iPhones so that she will have something secure.

My mom demonstrates a problem you'll have with the older generation: she doesn't reliably have her phone with her, and charged. She's the opposite of my dad, who is religiously slaved to his phone. Even a small change like making her lock her phone makes it even more likely she won't have it with her when you need to call her.


3. WiFi (WPA)

Make sure their home WiFi is WPA encrypted. It probably already is, but it's worthwhile checking.

The password should be written down on the same piece of paper as all the other passwords. This is important. My parents just moved, Comcast installed a WiFi access point for them, and they promptly lost the piece of paper. When I wanted to debug something on their network today, they didn't know the password and couldn't find the paper. Get that password written down in a place it won't get lost!

Discourage them from extra "security" features like "SSID hiding" and/or "MAC address filtering". They provide no security benefit, and actually make security worse. SSID hiding means a phone has to advertise the SSID when away from home, and MAC filtering makes MAC address randomization harder, both of which allow your privacy to be tracked.

If they have a really old home router, you should probably replace it, or at least update the firmware. A lot of old routers have vulnerabilities that allow hackers (like me, masscanning the Internet) to easily break in.


4. Ad blockers or Brave

Most of the online tricks that will confuse your older parents come via advertising, such as popups claiming "You are infected with a virus, click here to clean it". Installing an ad blocker in the browser, such as uBlock Origin, stops almost all of this nonsense.

For example, here's a screenshot of going to the "Speedtest" website to test the speed of my connection (I took this on the plane on the way home for Thanksgiving). Ignore the error (the plane's firewall blocks Speedtest) -- instead look at the advertising banner across the top of the page insisting you need to download a browser extension. This is tricking you into installing malware -- the ad appears as if it's a message from Speedtest, but it's not. Speedtest is just selling advertising space and has no clue what the banner says. This sort of thing needs to be blocked -- it fools even the technologically competent.

uBlock Origin for Chrome is the one I use. Another option is to replace their browser with Brave, a browser that blocks ads, but at the same time, allows micropayments to support websites you want to support. I use Brave on my iPhone.

A side benefit of ad blockers or Brave is that web surfing becomes much faster, since you aren't downloading all this advertising. The smallest NYtimes story is 15 megabytes in size due to all the advertisements, for example.


5. Cloud Backups

Do backups, in the cloud. It's a good idea in general, especially with the threat of ransomware these days.

In particular, consider your photos. Over time, they will be lost, because people make no effort to keep track of them. All hard drives will eventually crash, deleting your photos. Sure, a few key ones are backed up on Facebook for life, but the rest aren't.

There are so many excellent online backup services out there, like DropBox and Backblaze. Or, you can use the iCloud feature that Apple provides. My favorite is Microsoft's: I already pay $99 a year for Office 365 subscription, and it comes with 1-terabyte of online storage.


6. Separate email accounts

You should have three email accounts: work, personal, and financial.

First, you really need to separate your work account from personal. The IT department is already getting misdirected emails with your spouse/lover that they don't want to see. Any conflict with your work, such as getting fired, gives your private correspondence to their lawyers.

Second, you need a wholly separate account for financial stuff, like Amazon.com, your bank, PayPal, and so on. That prevents confusion with phishing attacks.

Consider this warning today:
If you had split accounts, you could safely ignore this. The USPS would only know your financial email account, which gets no phishing attacks because it's not widely known. When you receive the phishing attack on your personal email, you ignore it, because you know the USPS doesn't know your personal email account.

Phishing emails are so sophisticated that even experts can't tell the difference. Splitting financial from personal emails makes it so you don't have to tell the difference -- anything financial sent to personal email can safely be ignored.

7. Deauth those apps!

Twitter user @tompcoleman comments that we also need deauth apps.

Social media sites like Facebook, Twitter, and Google encourage you to enable "apps" that work with their platforms, often demanding privileges to generate messages on your behalf. The typical scenario is that you use them only once or twice and forget about them.

A lot of them are hostile. For example, my niece's twitter account would occasionally send out advertisements, and she didn't know why. It's because a long time ago she enabled an app with the permission to send tweets for her. I had to sit down and get rid of most of her apps.

Now would be a good time to go through your relatives' Facebook, Twitter, and Google/GMail accounts and disable those apps. Don't be afraid to be ruthless -- they probably weren't using them anyway. Some will still be necessary; for example, Twitter for iPhone shows up in the list of Twitter apps. The URL for editing these apps for Twitter is https://twitter.com/settings/applications. The Google link is here (thanks @spextr). I don't know of a simple URL for Facebook, but you should find it somewhere under privacy/security settings.

Update: Here's a more complete guide for even more social media services.
https://www.permissions.review/


8. Up-to-date software? maybe

I put this last because it can be so much work.

You should install the latest OS (Windows 10, macOS High Sierra), and also turn on automatic patching.

But remember it may not be worth the huge effort involved. I want my parents to be secure -- but not so secure that I have to deal with the issues.

For example, when my parents updated their HP printer software, the desktop icon my mom uses to scan things in from the printer disappeared, and I had to spend 15 minutes with her finding the new way to access the software.

However, I did get my mom a new netbook to travel with instead of the old WinXP one. I want to get her a Chromebook, but she doesn't want one.

For iOS, you can probably make sure their phones have the latest version without having these usability problems.

Conclusion

You can't solve every problem for your relatives, but these are the more critical ones.

Why Linus is right (as usual)

People are debating this email from Linus Torvalds (maintainer of the Linux kernel). It has strong language, like:
Some security people have scoffed at me when I say that security
problems are primarily "just bugs".
Those security people are f*cking morons.
Because honestly, the kind of security person who doesn't accept that
security problems are primarily just bugs, I don't want to work with.
I thought I'd explain why Linus is right.

Linus has an unwritten manifesto of how the Linux kernel should be maintained. It's not written down in one place, instead we are supposed to reverse engineer it from his scathing emails, where he calls people morons for not understanding it. This is one such scathing email. The rules he's expressing here are:
  • Large changes to the kernel should happen in small iterative steps, each one thoroughly debugged.
  • Minor security concerns aren't major emergencies; they don't allow bypassing the rules more than any other bug/feature.
Last year, some security "hardening" code was added to the kernel to prevent a class of buffer-overflow/out-of-bounds issues. This code didn't address any particular 0day vulnerability, but was designed to prevent a class of future potential exploits from being exploited. This is reasonable.

This code had bugs, but that's no sin. All code has bugs.

The sin, from Linus's point of view, is that when an overflow/out-of-bounds access was detected, the code would kill the user-mode process or kernel. Linus thinks it should have only generated warnings, and let the offending code continue to run.

Of course, that would in theory make the change of little benefit, because it would no longer prevent 0days from being exploited.

But warnings would only be temporary, the first step. There are likely to be bugs in a large code change, and it would probably uncover bugs in other code. While bounds-checking is a security issue, its first implementation will always find existing code with latent bounds bugs. Or it'll have "false positives", triggering on things that aren't actually the flaws it's looking for. Killing things made these bugs worse, causing catastrophic failures in the latest kernel that didn't exist before. Warnings, however, would have highlighted the bugs just as well, but without causing catastrophic failures. My car runs multiple copies of Linux -- such catastrophic failures would risk my life.

Only after a year, when the bugs have been fixed, would the default behavior of the code be changed to kill buggy code, thus preventing exploitation.

In other words, large changes to the kernel should happen in small, manageable steps. This hardening hasn't existed for 25 years of the Linux kernel, so there's no emergency requiring it be added immediately rather than conservatively, no reason to bypass Linus's development processes. There's no reason it couldn't have been warnings for a year while working out problems, followed by killing buggy code later.
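The difference between the two policies can be sketched in a few lines of user-space Python. This is purely illustrative, not kernel code (the kernel's real mechanism involves C macros like WARN_ON and BUG_ON); it just shows why "warn" lets latent bugs surface harmlessly while "kill" turns them into outages:

```python
# Illustrative sketch of the two hardening policies: "warn" logs the
# violation and lets the caller continue with a safe default value;
# "kill" terminates, which is what Linus objected to as the default.
import warnings

class CheckedBuffer:
    def __init__(self, data, policy="warn"):
        assert policy in ("warn", "kill")
        self.data = bytearray(data)
        self.policy = policy

    def read(self, index):
        if 0 <= index < len(self.data):
            return self.data[index]
        # Out-of-bounds access detected.
        if self.policy == "kill":
            raise MemoryError(f"out-of-bounds read at {index}")  # fatal
        warnings.warn(f"out-of-bounds read at {index}, returning 0")
        return 0  # keep the (buggy) caller running while the bug is triaged

buf = CheckedBuffer(b"abc")                    # warn mode: logs and continues
buf.read(10)                                   # emits a warning, returns 0
# CheckedBuffer(b"abc", "kill").read(10)       # kill mode: raises MemoryError
```

With "warn" as the default for a year, every warning in the logs is a latent bug to fix; only after that cleanup does flipping the default to "kill" make sense.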

Linus was correct here. No vuln has appeared in the last year that this code would've stopped, so the fact that it killed processes/kernels rather than generated warnings was unnecessary. Conversely, because it killed things, bugs in the kernel code were costly, and required emergency patches.

Despite his unreasonable tone, Linus is a hugely reasonable person. He's not trying to stop changes to the kernel. He's not trying to stop security improvements. He's not even trying to stop processes from getting killed. That's not why people are moronic. Instead, they are moronic for not understanding that large changes need to be made conservatively, and that security issues are no more important than any other feature/bug.




Update: Also, since most security people aren't developers, they are also a bit clueless about how things actually work. Bounds-checking, which they define as purely a security feature to stop buffer-overflows, is actually overwhelmingly a debugging feature. When you turn on bounds-checking for the first time, it'll trigger on a lot of latent bugs in the code -- things that never caused a problem in the past (like reading past the ends of buffers) but cause trouble now. Developers know this; security "experts" tend not to. These kernel changes were made by security people who failed to understand this, who failed to realize that their changes would uncover lots of bugs in existing code, and that killing buggy code was hugely inappropriate.

Update: Another flaw developers are intimately familiar with is how "hardening" code can cause false-positives, triggering on non-buggy code. A good example is where the BIND9 code crashed on an improper assert(). This hardening code designed to prevent exploitation made things worse by triggering on valid input/code.

Update: No, it's probably not okay to call people "morons" as Linus does. They may be wrong, but they usually are reasonable people. On the other hand, security people tend to be sanctimonious bastards with rigid thinking, so after he has dealt with that minority, I can see why Linus treats all security people that way.

How to read newspapers

News articles don't contain the information you think. Instead, they are written according to a formula, and that formula is as much about distorting/hiding information as it is about revealing it.

A good example is the following. I claimed hate-crimes aren't increasing. The tweet below tries to disprove me, by citing a news article that claims the opposite:




But the data behind this article tells a very different story than the words.

Every November, the FBI releases its hate-crime statistics for the previous year. They've been doing this every year for a long time. When they do so, various news organizations grab the data and write a quick story around it.

By "story" I mean a narrative. Raw numbers don't interest people, so the writer has to wrap them in a narrative that does. That's what the writer has done in the above story, leading with the fact that hate crimes have increased.

But is this increase meaningful? What do the numbers actually say?

To answer this, I went to the FBI's website, the source of this data, and grabbed the numbers for the last 20 years, and graphed them in Excel, producing the following graph:


As you can see, there is no significant rise in hate-crimes. Indeed, the latest numbers are about 20% below the average for the last two decades, despite a tiny increase in the last couple years. Statistically/scientifically, there is no change, but you'll never read that in a news article, because it's boring and readers won't pay attention. You'll only get a "news story" that weaves a narrative that interests the reader.
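The check behind that claim is simple to express in code: compare the latest year against the long-run average, not just against last year. (The numbers below are made-up placeholders to show the shape of the check, not the FBI's actual figures.)

```python
def vs_long_run_mean(counts):
    """Compare the latest yearly count against the long-run mean.
    Returns (latest, mean, ratio)."""
    latest = counts[-1]
    mean = sum(counts) / len(counts)
    return latest, mean, latest / mean

# Hypothetical yearly counts, NOT real FBI data: a small rise in the
# last two years, but still well below the long-run average.
yearly = [8000, 7800, 7600, 7200, 6800, 6500, 6200, 5900, 6100, 6300]
latest, mean, ratio = vs_long_run_mean(yearly)
print(latest < mean)  # the "increase" is still below the long-run mean
```

A news story compares year N to year N-1; the honest comparison is year N to the whole series.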

So back to the original tweet exchange. The person used the news story to disprove my claim, but the underlying data supports my claim that hate-crimes are going down, not up -- the small increases of the past couple years are insignificant compared to the larger decreases of the last two decades.

So that's the point of this post: news stories are deceptive. You have to double-check the data they are based upon, and pay less attention to the narrative they weave, and even less attention to the title designed to grab your attention.


Anyway, as a side-note, I'd like to apologize for being human. The snark/sarcasm of the tweet above gives me extra pleasure in proving them wrong :).

Some notes about the Kaspersky affair

I thought I'd write up some notes about Kaspersky, the Russian anti-virus vendor that many believe has ties to Russian intelligence.

There are two angles to this story. One is whether the accusations are true. The second is the poor way the press has handled the story, with mainstream outlets like the New York Times more intent on pushing government propaganda than on informing us what's going on.


The press

Before we address Kaspersky, we need to talk about how the press covers this.

The mainstream media's stories have been pure government propaganda, like this one from the New York Times. It garbles the facts of what happened, and relies primarily on anonymous government sources that cannot be held accountable. It's so messed up that we can't easily challenge it because we aren't even sure exactly what it's claiming.

The Society of Professional Journalists has a name for this abuse of anonymous sources, the "Washington Game". Journalists can identify this as bad journalism, but big newspapers like The New York Times continue to do it anyway, because how dare anybody criticize them?

For all that I hate the anti-American bias of The Intercept, at least they've had stories that de-garble what's going on, that explain things so that we can challenge them.


Our Government

Our government can't tell us everything, of course. But at the same time, it needs to tell us something, to at least be clear what its accusations are. These vague insinuations through the media hurt its credibility rather than help it. The obvious craptitude is making us in the cybersecurity community come to Kaspersky's defense, which is not the government's aim at all.

There are lots of issues involved here, but let's consider the major one insinuated by the NYTimes story, that Kaspersky was getting "data" files along with copies of suspected malware. This is troublesome if true.

But, as Kaspersky claims today, it's because they had detected malware within a zip file, and uploaded the entire zip -- including the data files within the zip.

This is reasonable. This is indeed how anti-virus generally works. It completely defeats the NYTimes insinuations.

This isn't to say Kaspersky is telling the truth, of course, but that's not the point. The point is that we are getting vague propaganda from the government further garbled by the press, making Kaspersky's clear defense the credible party in the affair.

It's certainly possible for Kaspersky to write signatures looking for strings like "TS//SI/OC/REL TO USA" that appear in secret US documents, then upload matching files to Russia. If that's what our government believes is happening, they need to come out and be explicit about it. They can easily set up honeypots, in the way described in today's story, to confirm it. However, it seems the government's own description of its honeypots is that Kaspersky only uploaded files that were clearly viruses, not data.
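To make concrete how trivial such a signature would be: a classification marking is just a byte string, which is also why a honeypot (seed decoy files carrying the marker, then watch what gets uploaded) is an easy test. A toy sketch, purely illustrative of the idea rather than of how any real AV engine works:

```python
# Toy signature scan: real AV engines are vastly more complex, but a
# classification-marking "signature" really is just a byte-string match.
MARKERS = [b"TS//SI/OC/REL TO USA"]

def matches_signature(blob):
    """Return the first marker found in the file's bytes, or None."""
    for marker in MARKERS:
        if marker in blob:
            return marker
    return None

# A honeypot test seeds a decoy document containing the marker, then
# observes whether the AV product uploads the whole file:
decoy = b"(U) cover page ... TS//SI/OC/REL TO USA ... body text"
print(matches_signature(decoy) is not None)  # True
```

The point is that the capability is cheap to build and cheap to test for; what's missing is the government doing the test and telling us the result.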

Kaspersky

I believe Kaspersky is guilty, that the company, and Eugene himself, work directly with Russian intelligence.

That's because on a personal basis, people in government have given me specific, credible stories -- the sort of thing they should be making public. And these stories are wholly unrelated to stories that have been made public so far.

You shouldn't believe me, of course, because I won't go into details you can challenge. I'm not trying to convince you, I'm just disclosing my point of view.

But there are some public reasons to doubt Kaspersky. For example, when trying to sell to our government, they've claimed they can help us against terrorists. The translation of this is that they could help our intelligence services. Well, if they are willing to help our intelligence services against customers who are terrorists, then why wouldn't they likewise help Russian intelligence services against their adversaries?

Then there is how Russia works. It's a violent country. Most of the people mentioned in that "Steele Dossier" have died. In the hacker community, hackers are often coerced to help the government. Many have simply gone missing.

Being rich doesn't make Kaspersky immune from this -- it makes him more of a target. Russian intelligence knows he's getting all sorts of good intelligence, such as malware written by foreign intelligence services. It's unbelievable they wouldn't put the screws on him to get this sort of thing.

Russia is our adversary. It'd be foolish of our government to buy anti-virus from Russian companies. Likewise, the Russian government won't buy such products from American companies.

Conclusion

I have enormous disrespect for mainstream outlets like The New York Times and the way they've handled the story. It makes me want to come to Kaspersky's defense.

I have enormous respect for Kaspersky technology. They do good work.

But I hear stories. I don't think our government should be trusting Kaspersky at all. For that matter, our government shouldn't trust any cybersecurity products from Russia, China, Iran, etc.

Some notes on the KRACK attack

This is my interpretation of the KRACK attacks paper that describes a way of decrypting encrypted WiFi traffic with an active attack.

tl;dr: Wow. Everyone needs to be afraid. (Well, worried -- not panicked.) It means in practice, attackers can decrypt a lot of wifi traffic, with varying levels of difficulty depending on your precise network setup. My post last July about the DEF CON network being safe was in error.

Details

This is not a crypto bug but a protocol bug (a pretty obvious and trivial protocol bug).

When a client connects to the network, the access-point will at some point send random "key" data to use for encryption. Because this packet may be lost in transmission, it can be repeated many times.

What the hacker does is just repeatedly send this packet, potentially hours later. Each time it's received, it resets the "keystream" back to its starting conditions. The obvious patch that device vendors will make is to accept only the first such packet and ignore all duplicates.

At this point, the protocol bug becomes a crypto bug. We know how to break crypto when we have two keystreams from the same starting position. It's not always reliable, but reliable enough that people need to be afraid.
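To see concretely why two keystreams from the same starting position are fatal, here's a toy sketch in Python. It uses plain XOR with a random keystream, not the actual WPA2 cipher, so it's an illustration of the principle only: XORing two ciphertexts encrypted under the same keystream cancels the keystream, and a known (or guessed) plaintext then reveals the other.

```python
import os

def xor(a, b):
    # XOR two byte strings (truncates to the shorter one)
    return bytes(x ^ y for x, y in zip(a, b))

# The KRACK condition: the same keystream gets reused for two packets.
keystream = os.urandom(32)
p1 = b"GET /login HTTP/1.1\r\n"   # plaintext the attacker can guess
p2 = b"password=hunter2\r\n"      # plaintext the attacker wants

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# XORing the ciphertexts cancels the keystream entirely...
leak = xor(c1, c2)
# ...so guessing p1 recovers p2 without ever knowing the key.
recovered = xor(leak, p1)
assert recovered == p2
```

This "two-time pad" failure is the crypto bug that the protocol bug exposes; real traffic gives the attacker plenty of guessable plaintext (HTTP headers, protocol banners) to seed the recovery.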

Android, though, is the biggest danger. Rather than simply replaying the packet, a packet with key data of all zeroes can be sent. This allows attackers to set up a fake WiFi access-point and man-in-the-middle all traffic.

In a related case, the access-point/base-station can sometimes also be attacked, affecting the stream sent to the client.

Not only is sniffing possible, but in some limited cases, injection. This allows the traditional attack of adding bad code to the end of HTML pages in order to trick users into installing a virus.

This is an active attack, not a passive attack, so in theory, it's detectable.

Who is vulnerable?

Everyone, pretty much.

The hacker only needs to be within range of your WiFi. Your neighbor's teenage kid is going to be downloading and running the tool in order to eavesdrop on your packets.

The hacker doesn't need to be logged into your network.

It affects all WPA1/WPA2: the personal version with passwords that we use at home, and the enterprise version with certificates that we use in enterprises.

It can't defeat SSL/TLS or VPNs. Thus, if you feel your laptop is safe surfing the public WiFi at airports, then your laptop is still safe from this attack. With Android, it does allow running tools like sslstrip, which can fool many users.

Your home network is vulnerable. Many devices will be using SSL/TLS, so are fine, like your Amazon Echo, which you can continue to use without worrying about this attack. Other devices, like your Philips lightbulbs, may not be so protected.

How can I defend myself?

Patch.

More to the point, measure your current vendors by how long it takes them to patch. Throw away gear by those vendors that took a long time to patch and replace it with vendors that took a short time.

High-end access-points that contain "WIPS" (WiFi Intrusion Prevention System) features should be able to detect this and block vulnerable clients from connecting to the network (once the vendor upgrades the systems, of course). Even low-end access-points, like the $30 ones you get for home, can easily be updated to prevent packet sequence numbers from going back to the start (i.e. to prevent the keystream from resetting).
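As a sketch of what such a check might look like (a toy using hypothetical frame dictionaries, not any vendor's actual implementation), the core idea is simply to flag handshake key messages whose replay counter doesn't increase:

```python
# Toy detector: flag 4-way-handshake key messages whose replay counter
# fails to increase -- the retransmission that KRACK abuses.
def make_detector():
    last = {}  # (ap, client) -> highest replay counter seen so far

    def check(frame):
        key = (frame["ap"], frame["client"])
        ctr = frame["replay_counter"]
        ok = ctr > last.get(key, -1)
        if ok:
            last[key] = ctr
        return ok  # False => possible KRACK-style replay

    return check

check = make_detector()
assert check({"ap": "a", "client": "c", "replay_counter": 1})
assert check({"ap": "a", "client": "c", "replay_counter": 2})
# Same counter again: a retransmission, which the detector rejects.
assert not check({"ap": "a", "client": "c", "replay_counter": 2})
```

A patched client applies the same rule to itself: accept the key message once, ignore duplicates.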

At some point, you'll need to run the attack against yourself, to make sure all your devices are secure. Since you'll be constantly allowing random phones to connect to your network, you'll need to check their vulnerability status before connecting them. You'll need to continue doing this for several years.

Of course, if you are using SSL/TLS for everything, then your danger is mitigated. This is yet another reason why you should be using SSL/TLS for internal communications.

Most security vendors will add things to their products/services to defend you. While valuable in some cases, it's not a defense. The defense is patching the devices you know about, and preventing vulnerable devices from attaching to your network.

If I remember correctly, DEF CON uses Aruba. Aruba contains WIPS functionality, which means that by the time DEF CON rolls around again next year, they should have the feature to deny vulnerable devices from connecting, and specifically to detect an attack in progress and prevent further communication.

However, for an attacker near an Android device using a low-powered WiFi, it's likely they will be able to conduct man-in-the-middle without any WIPS preventing them.


"Responsible encryption" fallacies

Deputy Attorney General Rod Rosenstein gave a speech recently calling for "Responsible Encryption" (aka. "Crypto Backdoors"). It's full of dangerous ideas that need to be debunked.

The importance of law enforcement

The first third of the speech talks about the importance of law enforcement, as if it's the only thing standing between us and chaos. It cites the 2016 Mirai attacks as an example of the chaos that will only get worse without stricter law enforcement.

But the Mirai case demonstrated the opposite: how law enforcement is not needed. They made no arrests in the case. A year later, they still haven't a clue who did it.

Conversely, we technologists have fixed the major infrastructure issues. Specifically, those affected by the DNS outage have moved to multiple DNS providers, including high-capacity providers like Google and Amazon that can handle such large attacks easily.

In other words, we the people fixed the major Mirai problem, and law-enforcement didn't.

Moreover, instead of being a solution to cyber threats, law enforcement has become a threat itself. The DNC didn't have the FBI investigate the attacks from Russia likely because they didn't want the FBI reading all their files and finding wrongdoing by the DNC. It's not that they did anything actually wrong; it's more like that famous quote attributed to Richelieu: "Give me six lines written by the most honest of men and I'll find something in them to hang him by". Give all your internal emails over to the FBI and I'm certain they'll find something to hang you by, if they want.

Or consider the case of Andrew Auernheimer. He found AT&T's website made public user accounts of the first iPad, so he copied some down and posted them to a news site. AT&T had denied the problem, so making the problem public was the only way to force them to fix it. Such access to the website was legal, because AT&T had made the data public. However, prosecutors disagreed. In order to protect the powerful, they twisted and perverted the law to put Auernheimer in jail.

It's not that law enforcement is bad, it's that it's not the unalloyed good Rosenstein imagines. When law enforcement becomes the thing Rosenstein describes, it means we live in a police state.

Where law enforcement can't go

Rosenstein repeats the frequent claim in the encryption debate:
Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection
Of course our society has places "impervious to detection", protected by both legal and natural barriers.

An example of a legal barrier is how spouses can't be forced to testify against each other. This barrier is impervious.

A better example, though, is how so much of government, intelligence, the military, and law enforcement itself is impervious. If prosecutors could gather evidence everywhere, then why isn't Rosenstein prosecuting those guilty of CIA torture?

Oh, you say, government is a special exception. If that were the case, then why did Rosenstein dedicate a precious third of his speech to discussing the "rule of law" and how it applies to everyone, "protecting people from abuse by the government"? It obviously doesn't: there's one rule for the government and a different rule for the people, and the rule for the government means there are lots of places law enforcement can't go to gather evidence.

Likewise, the crypto backdoor Rosenstein is demanding for citizens doesn't apply to the President, Congress, the NSA, the Army, or Rosenstein himself.

Then there are the natural barriers. The police can't read your mind. They can only get the evidence that is there, like partial fingerprints, which are far less reliable than full fingerprints. They can't go backwards in time.

I mention this because encryption is a natural barrier. It's their job to overcome this barrier if they can, to crack crypto and so forth. It's not our job to do it for them.

It's like the camera that increasingly comes with TVs for video conferencing, or the microphone on Alexa-style devices that are always recording. This suddenly creates evidence that the police want our help in gathering, such as having the camera turned on all the time, recording to disk, in case the police later get a warrant to peer backward in time at what happened in our living rooms. The "nothing is impervious" argument applies here as well, and it's equally bogus: by not recording our activities for the police, we aren't somehow breaking some long-standing tradition.

And this is the scary part. It's not that we are breaking some ancient tradition that there's no place the police can't go (with a warrant). Instead, crypto backdoors break the tradition that never before have I been forced to help them eavesdrop on me, even before I'm a suspect, even before any crime has been committed. Sure, laws like CALEA force the phone companies to help the police against wrongdoers -- but here Rosenstein is insisting I help the police against myself.

Balance between privacy and public safety

Rosenstein repeats the frequent claim that encryption upsets the balance between privacy/safety:
Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety.
This is laughable, because technology has swung the balance alarmingly in favor of law enforcement. Far from "Going Dark" as his side claims, the problem we are confronted with is "Going Light", where the police state monitors our every action.

You are surrounded by recording devices. If you walk down the street in town, outdoor surveillance cameras feed police facial recognition systems. If you drive, automated license plate readers can track your route. If you make a phone call or use a credit card, the police get a record of the transaction. If you stay in a hotel, they demand your ID, for law enforcement purposes.

And that's their stuff, which is nothing compared to your stuff. You are never far from a recording device you own, such as your mobile phone, TV, Alexa/Siri/OkGoogle device, laptop. Modern cars from the last few years increasingly have always-on cell connections and data recorders that record your every action (and location).

Even if you hike out into the country, when you get back, the FBI can subpoena your GPS device to track down your hidden weapons cache, or grab the photos from your camera.

And this is all offline. So much of what we do is now online. Of the photographs you own, fewer than 1% are printed out, the rest are on your computer or backed up to the cloud.

Your phone is also a GPS recorder of your exact position all the time, which, if the government wins the Carpenter case, the police can grab without a warrant. Tagging all citizens with a recording device of their position is not "balance" but the premise for a novel more dystopian than 1984.

If suspected of a crime, which would you rather the police searched? Your person, houses, papers, and physical effects? Or your mobile phone, computer, email, and online/cloud accounts?

The balance of privacy and safety has swung so far in favor of law enforcement that rather than debating whether they should have crypto backdoors, we should be debating how to add more privacy protections.

"But it's not conclusive"

Rosenstein defends "going light" (the "Golden Age of Surveillance") by pointing out that it's not always enough for conviction. Nothing secures a conviction better than a person's own words admitting to the crime, captured by surveillance. This other data, while copious, often fails to convince a jury beyond a reasonable doubt.

This is nonsense. Police got along well enough before the digital age, before such widespread messaging. They solved terrorist and child abduction cases just fine in the 1980s. Sure, somebody's GPS location isn't by itself enough -- until you go there and find all the buried bodies, which leads to a conviction. "Going dark" imagines that somehow, the evidence they've been gathering for centuries is going away. It isn't. It's still here, and matches up with even more digital evidence.

Conversely, a person's own words are not as conclusive as you think. There's always missing context. We quickly get back to the Richelieu "six lines" problem, where captured communications are twisted to convict people, with defense lawyers trying to untwist them.

Rosenstein's claim may be true, that a lot of criminals will go free because the other electronic data isn't convincing enough. But I'd need to see that claim backed up with hard studies, not thrown out for emotional impact.

Terrorists and child molesters

You can always tell the lack of seriousness of law enforcement when they bring up terrorists and child molesters.

To be fair, sometimes we do need to talk about terrorists. There are things unique to terrorism where we may need to give government explicit powers to address those unique concerns. For example, the NSA buys mobile phone 0day exploits in order to hack terrorist leaders in tribal areas. This is a good thing.

But when terrorists use encryption the same way everyone else does, then it's not a unique reason to sacrifice our freedoms to give the police extra powers. Either it's a good idea for all crimes or for no crimes -- there's nothing particular about terrorism that makes it an exceptional crime. Dead people are dead. Any rational view of the problem relegates terrorism to being a minor one. More citizens have died since September 11, 2001 from their own furniture than from terrorism. According to studies, the hot water from your tap is more of a threat to you than terrorists.

Yes, government should do what it can to protect us from terrorists, but no, the threat is not so bad that it requires the imposition of a military/police state. When people use terrorism to justify their actions, it's because they're trying to form a military/police state.

A similar argument works with child porn. Here's the thing: the pervs aren't exchanging child porn using the services Rosenstein wants to backdoor, like Apple's Facetime or Facebook's WhatsApp. Instead, they are exchanging child porn using custom services they build themselves.

Again, I'm (mostly) on the side of the FBI. I support their idea of buying 0day exploits in order to hack the web browsers of visitors to the secret "PlayPen" site. This is something that's narrow to this problem and doesn't endanger the innocent. On the other hand, their calls for crypto backdoors endangers the innocent while doing effectively nothing to address child porn.

Terrorists and child molesters are a clichéd, non-serious excuse to appeal to our emotions to give up our rights. We should not give in to such emotions.

Definition of "backdoor"

Rosenstein claims that we shouldn't call backdoors "backdoors":
No one calls any of those functions [like key recovery] a “back door.”  In fact, those capabilities are marketed and sought out by many users.
He's partly right in that we rarely refer to PGP's key escrow feature as a "backdoor".

But that's because the term "backdoor" refers less to how it's done and more to who is doing it. If I set up a recovery password with Apple, I'm the one doing it to myself, so we don't call it a backdoor. If it's the police, spies, hackers, or criminals, then we call it a "backdoor" -- even if it's identical technology.
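To make the "identical technology" point concrete, here's a toy key-escrow sketch (an illustration of the concept only, using a trivial XOR wrap rather than real key-wrapping): the session key gets wrapped once for the user and once more for a second party. Whether we call that second party "recovery" or a "backdoor" depends entirely on who it is.

```python
import hashlib
import os

def wrap(session_key, wrapping_secret):
    # Toy key-wrap: XOR against a hash of the wrapping secret.
    # Real systems use RSA/ECC key-wrapping; this is illustration only.
    pad = hashlib.sha256(wrapping_secret).digest()[:len(session_key)]
    return bytes(a ^ b for a, b in zip(session_key, pad))

unwrap = wrap  # XOR wrapping is its own inverse

session_key = os.urandom(16)            # the key protecting the message
user_secret = b"user passphrase"        # held by the owner
escrow_secret = b"escrow authority"     # held by the second party

# The same key is wrapped twice: once per party.
blob_user = wrap(session_key, user_secret)
blob_escrow = wrap(session_key, escrow_secret)

# Either party recovers the same session key independently.
assert unwrap(blob_user, user_secret) == session_key
assert unwrap(blob_escrow, escrow_secret) == session_key
```

When I choose the second secret myself, it's "key recovery"; when the government holds it, it's a backdoor -- the mechanism is the same.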

Wikipedia uses the key escrow feature of the 1990s Clipper Chip as a prime example of what everyone means by "backdoor". By "no one", Rosenstein is including Wikipedia, which is obviously incorrect.

Though in truth, it's not going to be the same technology. The needs of law enforcement are different than my personal key escrow/backup needs. In particular, there are unsolvable problems, such as a backdoor that works for the "legitimate" law enforcement in the United States but not for the "illegitimate" police states like Russia and China.

I feel for Rosenstein, because the term "backdoor" does have a pejorative connotation, which can be considered unfair. But that's like saying the word "murder" is a pejorative term for killing people, or "torture" is a pejorative term for torture. The bad connotation exists because we don't like government surveillance. I mean, honestly calling this feature "government surveillance feature" is likewise pejorative, and likewise exactly what it is that we are talking about.

Providers

Rosenstein focuses his arguments on "providers", like Snapchat or Apple. But this isn't the question.

The question is whether a "provider" like Telegram, a Russian company beyond US law, provides this feature. Or, by extension, whether individuals should be free to install whatever software they want, regardless of provider.

Telegram is a Russian company that provides end-to-end encryption. Anybody can download their software in order to communicate so that American law enforcement can't eavesdrop. They aren't going to put in a backdoor for the U.S. If we succeed in putting backdoors in Apple and WhatsApp, all this means is that criminals are going to install Telegram.

If, for some reason, the US is able to convince all such providers (including Telegram) to install a backdoor, it still doesn't solve the problem, as users can just build their own end-to-end encryption app that has no provider. It's like email: some use major providers like GMail, others set up their own email server.

Ultimately, this means that any law mandating "crypto backdoors" is going to target users not providers. Rosenstein tries to make a comparison with what plain-old telephone companies have to do under old laws like CALEA, but that's not what's happening here. Instead, for such rules to have any effect, they have to punish users for what they install, not providers.

This continues the argument I made above. Government backdoors don't force Internet services to eavesdrop on us -- they force us to help the government spy on ourselves.

Rosenstein tries to address this by pointing out that it's still a win if major providers like Apple and Facetime are forced to add backdoors, because they are the most popular, and some terrorists/criminals won't move to alternate platforms. This is false. People with good intentions, who are unfairly targeted by a police state, the ones where police abuse is rampant, are the ones who use the backdoored products. Those with bad intentions, who know they are guilty, will move to the safe products. Indeed, Telegram is already popular among terrorists because they believe American services are already all backdoored. 

Rosenstein is essentially demanding the innocent get backdoored while the guilty don't. This seems backwards. This is backwards.

Apple is morally weak

The reason I'm writing this post is because Rosenstein makes a few claims that cannot be ignored. One of them is how he describes Apple's response to government insistence on weakening encryption: doing the opposite, strengthening encryption. He reasons this happens because:
Of course they [Apple] do. They are in the business of selling products and making money. 
We [the DoJ] use a different measure of success. We are in the business of preventing crime and saving lives. 
He swells with importance. His condescending tone ennobles himself while debasing others. But this isn't how things work. He's not some white knight above the peasantry, protecting us. He's a beat cop, a civil servant, who serves us.

A better phrasing would have been:
They are in the business of giving customers what they want.
We are in the business of giving voters what they want.
Both sides are doing the same thing: giving people what they want. Yes, voters want safety, but they also want privacy. Rosenstein imagines that he's free to ignore our demands for privacy as long as he's fulfilling his duty to protect us. He has explicitly rejected what people want: "we use a different measure of success". He imagines it's his job to tell us where the balance between privacy and safety lies. That's not his job, that's our job. We, the people (and our representatives), make that decision, and his job is to do what he's told. His measure of success is how well he fulfills our wishes, not how well he satisfies his imagined criteria.

That's why those of us on this side of the debate doubt the good intentions of those like Rosenstein. He criticizes Apple for wanting to protect our rights/freedoms, and declares that he measures success differently.

They are willing to be vile

Rosenstein makes this argument:
Companies are willing to make accommodations when required by the government. Recent media reports suggest that a major American technology company developed a tool to suppress online posts in certain geographic areas in order to embrace a foreign government’s censorship policies. 
Let me translate this for you:
Companies are willing to acquiesce to vile requests made by police-states. Therefore, they should acquiesce to our vile police-state requests.
What Rosenstein is admitting here is that his requests are those of a police-state.

Constitutional Rights

Rosenstein says:
There is no constitutional right to sell warrant-proof encryption.
Maybe. It's something the courts will have to decide. There are many 1st, 2nd, 3rd, 4th, and 5th Amendment issues here.

The reason we have the Bill of Rights is because of the abuses of the British Government. For example, they quartered troops in our homes, as a way of punishing us, and as a way of forcing us to help in our own oppression. The troops weren't there to defend us against the French, but to defend us against ourselves, to shoot us if we got out of line.

And that's what crypto backdoors do. We are forced to be agents of our own oppression. The principles enumerated by Rosenstein apply to a wide range of additional surveillance. With little change, his speech could equally argue why the constant TV surveillance from 1984 should be made law.

Let's go back and look at Apple. It is not some base company exploiting consumers for profit. Apple doesn't have guns, they cannot make people buy their product. If Apple doesn't provide customers what they want, then customers vote with their feet, and go buy an Android phone. Apple isn't providing encryption/security in order to make a profit -- it's giving customers what they want in order to stay in business.

Conversely, if we citizens don't like what the government does, tough luck, they've got the guns to enforce their edicts. We can't easily vote with our feet and walk to another country. A "democracy" is far less democratic than capitalism. Apple is a minority, selling phones to 45% of the population, and that's fine, the minority get the phones they want. In a Democracy, where citizens vote on the issue, those 45% are screwed, as the 55% impose their will unwanted onto the remainder.

That's why we have the Bill of Rights, to protect the 49% against abuse by the 51%. Regardless of whether the Supreme Court finds it in the current Constitution, it's the sort of right that ought to exist regardless of what the Constitution says.

Obliged to speak the truth

Here is another part of his speech that I feel cannot be ignored. We have to discuss this:
Those of us who swear to protect the rule of law have a different motivation.  We are obliged to speak the truth.
The truth is that “going dark” threatens to disable law enforcement and enable criminals and terrorists to operate with impunity.
This is not true. Sure, he's obliged to say the absolute truth, in court. He's also obliged to be truthful in general about facts in his personal life, such as not lying on his tax return (the sort of thing that can get lawyers disbarred).

But he's not obliged to tell his spouse his honest opinion whether that new outfit makes them look fat. Likewise, Rosenstein knows his opinion on public policy doesn't fall into this category. He can say with impunity that either global warming doesn't exist, or that it'll cause a biblical deluge within 5 years. Both are factually untrue, but it's not going to get him fired.

And this particular claim is also exaggerated bunk. While everyone agrees encryption makes law enforcement's job harder than with backdoors, nobody honestly believes it can "disable" law enforcement. While everyone agrees that encryption helps terrorists, nobody believes it can enable them to act with "impunity".

I feel bad here. It's a terrible thing to question your opponent's character this way. But Rosenstein made this unavoidable when he clearly, with no ambiguity, put his integrity as Deputy Attorney General on the line behind the statement that "going dark threatens to disable law enforcement and enable criminals and terrorists to operate with impunity". I feel it's a bald-faced lie, but you don't need to take my word for it. Read his own words yourself and judge his integrity.

Conclusion

Rosenstein's speech includes repeated references to ideas like "oath", "honor", and "duty". It reminds me of Col. Jessup's speech in the movie "A Few Good Men".

If you'll recall, it was a rousing speech, "you want me on that wall" and "you use words like honor as a punchline". Of course, since he was violating his oath and sending two privates to death row in order to avoid being held accountable, it was Jessup himself who was crapping on the concepts of "honor", "oath", and "duty".

And so is Rosenstein. He imagines himself on that wall, doing admittedly terrible things, justified by his duty to protect citizens. He imagines that it's he who is honorable, while the rest of us are not, even as he utters bald-faced lies to further his own power and authority.

We activists oppose crypto backdoors not because we lack honor, or because we are criminals, or because we support terrorists and child molesters. It's because we value privacy and fear government officials who get corrupted by power. It's not that we fear Trump becoming a dictator; it's that we fear bureaucrats at Rosenstein's level becoming drunk on authority, as Rosenstein demonstrably has. His speech is a long train of corrupt ideas pursuing the same object of despotism -- a despotism we oppose.

In other words, we oppose crypto backdoors because they are not a tool of law enforcement, but a tool of despotism.

Microcell through a mobile hotspot

I accidentally acquired a tree farm 20 minutes outside of town. For utilities, it gets electricity and basic phone. It doesn't get water, sewer, cable, or DSL (i.e. no Internet). Also, it doesn't really get cell phone service. While you can get SMS messages up there, you usually can't get a call connected, or hold a conversation if it does.

We have found a solution -- an evil solution. We connect an AT&T "Microcell", which provides home cell phone service through your Internet connection, to an AT&T Mobile Hotspot, which provides an Internet connection through your cell phone service.


Now, you may be laughing at this, because it's a circular connection. It's like trying to make a sailboat go by blowing on the sails, or lifting up a barrel to lighten the load in the boat.

But it actually works.

Since we get some, but not enough, cellular signal, we set up a mast 20 feet high with a directional antenna pointed at the cell tower 7.5 miles to the southwest, connected to a signal amplifier. It's still an imperfect solution, as we are still getting terrain distortions in the signal, but it provides a good enough signal-to-noise ratio to get a solid connection.
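For those curious about the numbers, here's a back-of-the-envelope free-space path-loss estimate for a 7.5-mile link (assuming AT&T's 1900 MHz PCS band; the real loss is higher still, since terrain and trees add to it):

```python
import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss: 20*log10(d) + 20*log10(f) - 147.55, in dB
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

miles = 7.5
loss = fspl_db(miles * 1609.34, 1.9e9)
print(f"{loss:.1f} dB")  # roughly 120 dB of free-space loss alone
```

Roughly 120 dB of loss before obstructions even enter the picture, which is why the directional antenna and amplifier are doing real work here.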

We then connect that directional antenna directly to a high-end Mobile Hotspot. This gives us a solid 2 Mbps connection with latency under 30 milliseconds. That's far below the 50 Mbps you can get right next to a 4G/LTE tower, but it's still pretty good for our purposes.

We then connect the AT&T Microcell to the Mobile Hotspot, via WiFi.

To avoid the circular connection, we lock the Mobile Hotspot to the 4G/LTE frequencies and the Microcell to 3G. This prevents the Mobile Hotspot from locking onto the strong 3G signal from the Microcell, and it prevents the two from interfering with each other.

This works really great. We now get a strong cell signal on our phones even 400 feet from the house, through some trees. We can be all over the property, out on the lake, down by the garden, and so on, and have our phones work as normal. It's only AT&T, but that's what the whole family uses.

You might be asking why we didn't just use a normal signal amplifier, like they use on corporate campuses, which boosts all the analog frequencies, making any cell phone service work.

We've tried this, and it works a bit, allowing cell phones to work inside the house pretty well. But they don't work outside the house, which is where we spend a lot of time. In addition, while our newer phones work, my sister's iPhone 5 doesn't. We have no idea what's going on. Presumably, we could hire professional installers and stuff to get everything working, but nobody would quote us a price lower than $25,000 to even come look at the property.

Another possible solution is satellite Internet. There are two satellites in orbit that cover the United States with small "spot beams" delivering high-speed service (25 Mbps downloads). However, the latency is 500 milliseconds, which makes it impractical for low-latency applications like phone calls.

While I know a lot about the technology in theory, I find myself hopelessly clueless in practice. I've been playing with SDR ("software defined radio") to try to figure out exactly where to locate and point the directional antenna, but I'm not sure I've come up with anything useful. In casual tests, it seems rotating the antenna from vertical to horizontal increases the signal-to-noise ratio a bit, which seems counterintuitive and shouldn't happen. So I'm completely lost.

Anyway, I thought I'd write this up as a blogpost, in case anybody has a better suggestion. Or, instead of signals, suggestions for getting wired connectivity. Properties a half mile away get DSL; I wish I knew who to talk to at the local phone company so I could pay them to extend Internet service to our property.

Phone works in all this area now

Browser hacking for 280 character tweets

Twitter has raised the limit to 280 characters for a select number of people. However, they left open a hole allowing anybody to make large tweets with a little bit of hacking. Only basic hacking skills are needed, which I thought I'd write up in a blog post.


Specifically, the skills you will exercise are:

  • basic command-line shell
  • basic HTTP requests
  • basic browser DOM editing

The short instructions

The basic instructions were found in tweets like the following:

These instructions are clear to the average hacker, but of course, a bit difficult for those learning hacking, hence this post.

The command-line

The basics of most hacking start with knowledge of the command-line. This is the "Terminal" app under macOS or cmd.exe under Windows. Almost always when you see hacking dramatized in the movies, they are using the command-line.

In the beginning, the command-line is all computers had. To do anything on a computer, you had to type a "command" telling it what to do. What we see as the modern graphical screen is a layer on top of the command-line, one that translates clicks of the mouse into the raw commands.

On most systems, the command-line shell is known as "bash". This is what you'll find on Linux and macOS. Windows historically has had a different command-line that uses slightly different syntax, though in the last couple of years it has also supported "bash". You'll have to install it first, such as by following these instructions.

You'll see me use commands that may not yet be installed on your "bash" command-line, like nc and curl. You'll need to run a command to install them, such as:

sudo apt-get install netcat curl

The thing to remember about the command-line is that the mouse doesn't work. You can't click to move the cursor as you normally do in applications. That's because the command-line predates the mouse by decades. Instead, you have to use arrow keys.

I'm not going to spend much effort discussing the command-line, as a complete explanation is beyond the scope of this document. Instead, I'm assuming the reader either already knows it, or will learn-from-example as we go along.

Web requests

The basics of how the web works are really simple. A request to a web server is just a small packet of text, such as the following, which does a search on Google for the search-term "penguin" (presumably, you are interested in knowing more about penguins):

GET /search?q=penguin HTTP/1.0
Host: www.google.com
User-Agent: human

The command we are sending to the server is GET, meaning get a page. We are accessing the URL /search, which on Google's website, is how you do a search. We are then sending the parameter q with the value penguin. We also declare that we are using version 1.0 of the HTTP (hyper-text transfer protocol).

Following the first line there are a number of additional headers. In one header, we declare the Host name that we are accessing. Web servers can host many different websites, with different names, so this header is usually important.

We also add the User-Agent header. The "user-agent" means the "browser" that you use, like Edge, Chrome, Firefox, or Safari. It allows servers to send content optimized for different browsers. Since we are sending web requests without a browser here, we are joking around saying human.

Here's what happens when we use the nc program to send this to a google web server:
The first part is us typing, until we hit the [enter] key to create a blank line. After that point is the response from the Google server. We get back a result code (OK), followed by more headers from the server, and finally the contents of the webpage, which goes on for many screens. (We'll talk about what web pages look like below.)
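If you'd rather not type the request interactively, you can build it up front. The sketch below just constructs and prints the raw request; the \r\n escapes are the CRLF line endings HTTP expects, and the trailing blank line marks the end of the headers:

```shell
# Build the raw HTTP request as a string. HTTP lines end in CRLF (\r\n),
# and a blank line marks the end of the headers.
request='GET /search?q=penguin HTTP/1.0\r\nHost: www.google.com\r\nUser-Agent: human\r\n\r\n'

# printf interprets the \r\n escapes, producing the exact bytes to send.
printf "$request"
```

To actually send it, pipe it into nc: `printf "$request" | nc www.google.com 80`. This needs network access, and Google may answer a plain-HTTP request with a redirect rather than results.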

Note that a lot of HTTP headers are optional and really have little influence on what's going on. They are just junk added to web requests. For example, the P3P header we see Google report is some relic of 2002 that nobody uses anymore, as far as I can tell. Indeed, if you follow the URL in the P3P header, Google pretty much says exactly that.

I point this out because the request I show above is a simplified one. In practice, most requests contain a lot more headers, especially Cookie headers. We'll see that later when making requests.

Using cURL instead

Sending the raw HTTP request to the server, and getting raw HTTP/HTML back, is annoying. The better way of doing this is with the tool known as cURL, or plainly, just curl. You may be familiar with the older command-line tool wget. cURL is similar, but more flexible.

To use curl for the experiment above, we'd do something like the following. We are saving the web page to "penguin.html" instead of just spewing it on the screen.
Underneath, cURL builds an HTTP header just like the one we showed above, and sends it to the server, getting the response back.
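The command would look something like the following sketch, which just prints the command rather than running it (running it needs network access). The -A option sets the User-Agent header, and -o names the output file:

```shell
# A sketch of the equivalent cURL invocation: -A sets the User-Agent
# header, -o saves the response body to a file instead of printing it.
# We print the command here; paste the echoed line into your shell to run it.
cmd="curl -A human -o penguin.html 'http://www.google.com/search?q=penguin'"
echo "$cmd"
```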

Web-pages

Now let's talk about web pages. When you look at the web page we got back from Google while searching for "penguin", you'll see that it's intimidatingly complex. I mean, it intimidates me. But it all starts from some basic principles, so we'll look at some simpler examples.

The following is text of a simple web page:

<html>
<body>
<h1>Test</h1>
<p>This is a simple web page</p>
</body>
</html>

This is HTML, "hyper-text markup language". As its name implies, we "mark up" text, such as declaring the first text as a level-1 header (H1) and the following text as a paragraph (P).

In a web browser, this gets rendered as something that looks like the following. Notice how a header is formatted differently from a paragraph. Also notice that web browsers can use local files as well as make remote requests to web servers:
You can right-mouse click on the page and do a "View Source". This will show the raw source behind the web page:
Web pages don't just contain marked-up text. They contain two other important features: style information that dictates how things appear, and script that does all the live things web pages do, from which we build web apps.

So let's add a little bit of style and scripting to our web page. First, let's view the source we'll be adding:
In our header (H1) field, we've added the attribute to the markup giving this an id of mytitle. In the style section above, we give that element a color of blue, and tell it to align to the center.

Then, in our script section, we've told it that when somebody clicks on the element "mytitle", it should send an "alert" message of "hello".
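Putting that together, the page source would look something like this reconstruction (not the exact original; note there's deliberately no head tag, which Chrome will add for us later):

```html
<html>
<style>
/* Give the element with id "mytitle" a blue, centered style. */
#mytitle { color: blue; text-align: center; }
</style>
<script>
/* Once the page loads, hook a click on "mytitle" to show an alert. */
window.onload = function () {
  document.getElementById("mytitle").onclick = function () {
    alert("hello");
  };
};
</script>
<body>
<h1 id="mytitle">Test</h1>
<p>This is a simple web page</p>
</body>
</html>
```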

This is what our web page now looks like, with the center blue title:
When we click on the title, we get a popup alert:
Thus, we see an example of the three components of a webpage: markup, style, and scripting.

Chrome developer tools

Now we go off the deep end. Right-mouse click on "Test" (not normal click, but right-button click, to pull up a menu). Select "Inspect".
You should now get a window that looks something like the following. Chrome splits the screen in half, showing the web page on the left and its debug tools on the right.
This looks similar to what "View Source" shows, but it isn't. Instead, it's showing how Chrome interpreted the source HTML. For example, our style/script tags should've been marked up with a head (header) tag. We forgot it, but Chrome adds it in anyway.

What Google is showing us is called the DOM, or document object model. It shows us all the objects that make up a web page, and how they fit together.

For example, it shows us how the style information for #mytitle is created. It first starts with the default style information for an h1 tag, and then how we've changed it with our style specifications.

We can edit the DOM manually. Just double-click on things you want to change. For example, in this screenshot, I've changed the style spec from blue to red, and I've changed the header and paragraph text. The original file on disk hasn't changed, but I've changed the DOM in memory.

This is a classic hacking technique. If you don't like things like paywalls, for example, just right-click on the element blocking your view of the text, "Inspect" it, then delete it. (This works for some paywalls).

This edits the markup and style info, but changing the scripting stuff is a bit more complicated. To do that, click on the [Console] tab. This is the scripting console, and allows you to run code directly as part of the webpage. We are going to run code that resets what happens when we click on the title. In this case, we are simply going to change the message to "goodbye".
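What we type into the console is essentially one assignment that replaces the element's click handler. The sketch below stubs out minimal stand-ins for document and alert so it can run outside a browser; in the real DevTools console you'd use the actual ones:

```javascript
// Minimal stand-ins for the browser's document and alert, so this sketch
// runs outside DevTools. In a real console, the browser provides these.
const shown = [];
const alert = (msg) => shown.push(msg);
const document = {
  byId: { mytitle: { onclick: () => alert("hello") } },
  getElementById(id) { return this.byId[id]; },
};

// The actual console hack: replace the old click handler with a new one.
document.getElementById("mytitle").onclick = () => alert("goodbye");

// Simulate clicking the title; the new handler runs, not the old one.
document.getElementById("mytitle").onclick();
console.log(shown); // [ 'goodbye' ]
```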
Now when we click on the title, we indeed get the message:

Again, a common way to get around paywalls is to run some code like this that changes which functions will be called.

Putting it all together

Now let's put this all together in order to hack Twitter to allow us (the non-chosen) to tweet 280 characters. Review Dildog's instructions above.

The first step is to get to Chrome Developer Tools. Dildog suggests F12. I suggest right-clicking on the Tweet button (or Reply button, as I use in my example) and doing "Inspect", as I describe above.

You'll now see your screen split in half, with the DOM toward the right, similar to how I describe above. However, Twitter's app is really complex. Well, not really complex, it's all basic stuff when you come right down to it. It's just so much stuff -- it's a large web app with lots of parts. So we have to dive in without understanding everything that's going on.

The Tweet/Reply button we are inspecting is going to look like this in the DOM:


The Tweet/Reply button is currently greyed out because it has the "disabled" attribute. You need to double-click on it and remove that attribute. Also, in the class attribute, there is a "disabled" part. Double-click on that and remove just that "disabled" as well, without disturbing the stuff around it. This should change the button from disabled to enabled. It won't be greyed out, and it'll respond when you click on it.
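The exact markup changes as Twitter updates its app, but the element being edited looks roughly like this hypothetical sketch (the class name is made up; the real one is a long generated string):

```html
<!-- Before: hypothetical sketch of the greyed-out button -->
<button class="tweet-action disabled" disabled>Tweet</button>

<!-- After: both the disabled attribute and the "disabled" class removed -->
<button class="tweet-action">Tweet</button>
```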

Now click on it. You'll get an error message, as shown below:

What we've done here is bypass what's known as client-side validation. The script in the web page prevented sending Tweets longer than 140 characters. Our editing of the DOM changed that, allowing us to send a bad request to the server. Bypassing client-side validation this way is the source of a lot of hacking.

But Twitter still does server-side validation as well. They know any client-side validation can be bypassed, and are in on the joke. They tell us hackers "You'll have to be more clever". So let's be more clever.

In order to make longer 280-character tweets work for select customers, they had to change something on the server side. What they added was a "weighted_character_count=true" parameter to the HTTP request. We just need to repeat the request we generated above, adding this parameter.

In theory, we could do this by fiddling with the scripting. Dildog describes a different way: he copies the request out of the browser, edits it, then sends it via the command-line using curl.

We've used the [Elements] and [Console] tabs in Chrome's DevTools. Now we are going to use the [Network] tab. This lists all the requests the web page has made to the server. The twitter app is constantly making requests to refresh the content of the web page. The request we made trying to do a long tweet is called "create", and is red, because it failed.



Google Chrome gives us a number of ways to duplicate the request. The most useful is that it copies it as a full cURL command we can just paste onto the command-line. We don't even need to know cURL, it takes care of everything for us. On Windows, since you have two command-lines, it gives you a choice to use the older Windows cmd.exe, or the newer bash.exe. I use the bash version, since I don't know where to get the Windows command-line version of cURL.exe.


There's a lot going on here. The first thing to notice is the long xxxxxx strings. Those aren't in the original screenshot; I edited the picture, because they are session-cookies. If you inserted them into your browser, you'd hijack my Twitter session, and be able to tweet as me (such as making Carlos Danger style tweets). Therefore, I have to remove them from the example.

At the top of the screen is the URL that we are accessing, which is https://twitter.com/i/tweet/create. Much of the rest of the screen uses the cURL -H option to add a header. These are all the HTTP headers that I describe above. Finally, at the bottom, is the --data section, which contains the data bits related to the tweet, especially the tweet itself.

We need to edit either the URL above to read https://twitter.com/i/tweet/create?weighted_character_count=true, or add &weighted_character_count=true to the --data section at the bottom (either works). Remember: the mouse doesn't work on the command-line, so you have to use the cursor keys to navigate backwards in the line. Also, since the line is longer than the screen, it wraps across several visual lines, even though it's all a single line as far as the command-line is concerned.
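The edit itself is tiny. As a sketch, using the URL shown above (the many -H headers and the --data body from the copied command stay as they were):

```shell
# The URL from Chrome's "Copy as cURL" output...
url='https://twitter.com/i/tweet/create'

# ...with the parameter appended that turns on the server-side
# 280-character handling.
url="${url}?weighted_character_count=true"

echo "$url"
```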

Now just hit [return] on your keyboard, and the tweet will be sent to the server, which at the moment, works. Presto!

Twitter will either enable or disable the feature for everyone in a few weeks, at which point, this post won't work. But the reason I'm writing this is to demonstrate the basic hacking skills. We manipulate the web pages we receive from servers, and we manipulate what's sent back from our browser back to the server.

Easier: hack the scripting

Instead of messing with the DOM and editing the HTTP request, the better solution would be to change the scripting that does both the client-side DOM validation and the HTTP request generation. The only reason Dildog didn't do that is that it's a lot more work to find where all this happens.

Others have, though. @Zemnmez did just that, though his technique works for the alternate TweetDeck client (https://tweetdeck.twitter.com) instead of the default client. Go copy his code from here, then paste it into the DevTools scripting [Console]. It'll go in and replace some scripting functions, much like my simpler example above.


The console shows a stream of error messages because TweetDeck has bugs; ignore those.

Now you can effortlessly do long tweets as normal, without all the messing around I've spent so much text in this blog post describing.

As I've mentioned before, you are only editing what's going on in the current web page. If you refresh the page, or close it, everything will be lost, and you'll have to re-open the DevTools scripting console and re-paste the code. An easier way of doing this is to use the [Sources] tab instead of [Console] and use the "Snippets" feature to save this bit of code in your browser, making it easier next time.

The even easier way is to use Chrome extensions like TamperMonkey and GreaseMonkey that'll take care of this for you. They'll save the script, and automatically run it when they see you open the TweetDeck webpage again.

An even easier way is to use one of the several Chrome extensions written in the past day specifically designed to bypass the 140 character limit. Since the purpose of this blog post is to show you how to tamper with your browser yourself, rather than help you with Twitter, I won't list them.

Conclusion

Tampering with the web-page the server gives you, and the data you send back, is a basic hacker skill. In truth, there is a lot to this. You have to get comfortable with the command-line, using tools like cURL. You have to learn how HTTP requests work. You have to understand how web pages are built from markup, style, and scripting. You have to be comfortable using Chrome's DevTools for messing around with web page elements, network requests, scripting console, and scripting sources.

So it's rather a lot, actually.

My hope with this page is to show you a practical application of all this, without getting too bogged down in fully explaining how every bit works.