After reading through some — certainly not all, and admittedly not
thoroughly — of the documents and analysis of the NSA “BULLRUN”
crypto-subversion program, as well as various panicky speculation on
the usual discussion sites, I can’t resist the temptation to make a
few predictions/guesses. At some point in the future I’ll revisit
them and we’ll all get to see whether things are actually better or
worse than I suspect they are.
I’m not claiming any special knowledge or expertise here; I’m just a
dog on the Internet.
Hypothesis 1: NSA hasn’t made any fundamental breakthroughs in cryptanalysis, such as a method of rapidly factoring large numbers, that would render public-key cryptography suddenly useless.
None of the leaks seems to suggest any heretofore-unknown capability that undermines the mathematics at the heart of PK crypto (trapdoor functions), e.g. a giant quantum computer that can simply brute-force arbitrarily large keys in a short amount of time. In fact, the leaks suggest that this capability almost certainly doesn’t exist; otherwise all the other messy stuff, like compromising various implementations, wouldn’t be necessary.
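To make the trapdoor idea concrete, here’s a toy sketch (deliberately tiny numbers; nothing here is cryptographically meaningful) of the asymmetry PK crypto rests on: multiplying two primes is cheap, while recovering them from the product is the hard direction, and the cost of the hard direction grows explosively with key size.

```python
# Toy illustration of the RSA-style trapdoor: multiplying two primes is
# one cheap operation, but recovering them from the product takes work
# that grows exponentially with the bit length. Real keys use 2048+ bit
# moduli, at which point trial division is utterly hopeless.

def naive_factor(n):
    """Trial division: the 'hard' direction of the trapdoor."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

p, q = 104_723, 104_729          # two small primes (the 9999th and 10000th)
n = p * q                        # the easy direction: one multiplication
print(naive_factor(n))           # ~100k divisions here; ~2^1000 at real key sizes
```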
Hypothesis 2: There are a variety of strategies used by NSA/GCHQ for getting access to encrypted communications, rather than a single exploit
This is a pretty trivial observation. There’s no single “BULLRUN”
vulnerability; instead there was an entire program aimed at
compromising various products to make them easier to break, and the
way this was done varied from case to case. I point
this out only because I suspect that it may get glossed over in public
discussions of the issue in the future, particularly if there are
especially egregious vulnerabilities that were inserted (as seems likely).
Hypothesis 3: Certificate authorities are probably compromised (duh)
This is conjecture on my part, and not drawn directly from any primary
source material. But the widely-accepted certificate authorities that
form the heart of SSL/TLS PKI are simply too big a target for anyone
wanting to monitor communications to ignore. If you have root certs
and access to backbone switches with suitably fast equipment, there’s
no technical reason why you can’t MITM TLS connections all day long.
However, MITM attacks are still active rather than passive, and probably infeasible on a universal basis, even for the NSA and its counterparts. Since they’re detectable by a careful-enough user
(e.g. someone who actually verifies a certificate fingerprint over a
side channel), it’s likely the sort of capability that you keep in
reserve for when it counts.
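For the curious, here’s a minimal sketch of what that side-channel verification might look like in practice, using only Python’s standard library: skip CA validation entirely and compare the SHA-256 fingerprint of whatever certificate the server actually presented against a value obtained out-of-band (say, read to you by the server’s admin over the phone). The host and known fingerprint below are placeholders.

```python
# Sketch: detect a MITM'd TLS connection by pinning the certificate
# fingerprint instead of trusting the CA hierarchy.
import hashlib
import socket
import ssl

HOST, PORT = "example.com", 443       # placeholder server
KNOWN_FINGERPRINT = "ab:cd:..."       # placeholder; obtain this over a side channel

# Deliberately disable CA/hostname validation; the pin is our only check.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)   # the cert actually presented

digest = hashlib.sha256(der).hexdigest()
fingerprint = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
if fingerprint != KNOWN_FINGERPRINT:
    raise SystemExit("fingerprint mismatch: possible MITM")
```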
None of this should really be surprising; if anyone seriously thought, pre-Snowden, that Verisign et al. wouldn’t and hadn’t handed over the secret keys to their root certs to the NSA, I’d say they were pretty naive.
Hypothesis 4: Offline attacks are facilitated in large part by weak random-number generators
Some documents allude to a program of recording large amounts of
encrypted Internet traffic for later decryption and analysis. This
rules out conventional MITM attacks, and implies some other method of
breaking commonly-used Internet cryptography.
At least one NSA-related weakness seems to have been Dual_EC_DRBG, the pseudorandom number generator specified in NIST SP 800-90; it was a bit ham-handed as these things go, since it was discovered, but it’s important because it shows an interest in subverting the random-number generation that everything else depends on.
It is possible that certain “improvements” were made to hardware RNGs, such as those used in VPN appliances and in many PCs, but the jury seems to be out right now. Still, compromising hardware makes somewhat more sense than compromising software: it’s much harder to audit and detect, and since it’s also harder to update, a flaw that ships is likely to stay in the field for years.
Engineered weaknesses inside [P]RNG hardware used in VPN appliances and other
enterprise gear might be the core of NSA’s offline intercept
capability, the crown jewel of the whole thing. However, it’s
important to keep in mind Hypothesis 2, above.
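To see why a weakened RNG is such an attractive offline capability, consider this toy sketch: if the “random” seed has only 16 bits of effective entropy, recorded traffic can be broken later by simple enumeration, no matter how strong the cipher itself is. (The XOR keystream below is a stand-in for a real cipher, just to keep the example self-contained.)

```python
# Toy demo: a "hardware RNG" whose effective entropy is only 16 bits.
# Any key derived from it can be recovered offline by enumerating seeds.
import hashlib

def weak_rng(seed: int) -> bytes:
    # Output looks like 32 random bytes, but only 65536 values are possible.
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Stand-in cipher: XOR against a key-derived keystream.
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(d ^ stream[i % 32] for i, d in enumerate(data))

# Victim encrypts with a key drawn from the backdoored RNG.
key = weak_rng(seed=31337)
ciphertext = xor_cipher(key, b"attack at dawn")

# Eavesdropper, later, entirely offline: try every possible seed.
for guess in range(2 ** 16):
    pt = xor_cipher(weak_rng(guess), ciphertext)
    if pt.startswith(b"attack"):          # known-plaintext check
        print("recovered:", pt, "seed:", guess)
        break
```

The point being: nothing on the wire looks wrong, no active attack is needed, and the break works on traffic recorded years earlier.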
Hypothesis 5: GCC and other compilers are probably not compromised
It’s possible, both in theory and to some degree in practice, to
compromise software by building flaws into the compiler that’s used to
create it. (The seminal paper on this topic is “Reflections on
Trusting Trust” by Ken Thompson. It’s worth reading.)
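For flavor, here’s a greatly simplified sketch of the first stage of that attack, with the “compiler” reduced to a source-to-source pass and a hypothetical check_password routine as the target. The real attack adds a second trigger that re-inserts this logic whenever the compiler compiles itself, so the backdoor survives even a rebuild from clean source.

```python
# Greatly simplified sketch of the Thompson "Trusting Trust" attack,
# stage one: a "compiler" that silently injects a flaw when it
# recognizes what it is compiling.

def evil_compile(source: str) -> str:
    if "def check_password" in source:
        # Inject a universal backdoor password into any login routine.
        source = source.replace(
            "def check_password(user, pw):",
            "def check_password(user, pw):\n"
            "    if pw == 'joshua': return True  # injected backdoor",
        )
    return source

clean_login = (
    "def check_password(user, pw):\n"
    "    return pw == lookup_hash(user)\n"
)
# The "binary" contains a flaw that appears nowhere in the source.
print(evil_compile(clean_login))
```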
Some only-slightly-paranoids have suggested that the NSA and its
sister organizations may have attempted to subvert commonly-used
compilers in order to weaken all cryptographic software produced with
them. I think this is pretty unlikely to have actually been carried
out; it just seems like the risk of discovery would be too high.
Despite the complexity of something like GCC, there are lots of people from a variety of organizations looking at it, and it would be difficult to subvert all of them, and harder still to insert an exploit that would go completely undetected. In comparison, it would be relatively easy to convince a single company producing ASICs to modify a proprietary design. Just based on bang-for-buck, I think that’s where the effort is likely to have gone.
Hypothesis 6: The situation is probably not hopeless, from a security standpoint
There is a refrain in some circles that the situation is now hopeless,
and that PK-cryptography-based information security is irretrievably
broken and can never be trusted ever again. I do not think that this
is the case.
My guess — and this is really a guess; it’s the thing that I’m hoping
will be true — is that there’s nothing fundamentally wrong with
public key crypto, or even in many carefully-built implementations.
It’s when you start optimizing for cost or speed that you open the door.
So: if you are very, very careful, you will still be
able to build up a reasonably-secure infrastructure using currently
available hardware and software. (‘Reasonably secure’ meaning
resistant to untargeted mass surveillance, not necessarily to a
targeted attack that might include physical bugging: that’s a much
higher bar.) However, some code may need to be changed in order to eliminate any reliance on possibly-compromised components, such as hardware RNGs and accelerators that are by their nature difficult to audit.
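As a sketch of what eliminating that reliance might look like (and, very loosely, what the Linux kernel does with RDRAND), one approach is to never consume a suspect source directly, but to hash it together with other sources, so that it can contribute entropy without ever single-handedly determining the output:

```python
# Sketch: treat the hardware RNG as one input among several and mix
# them with a hash. A backdoored source can add entropy but cannot
# control the result unless every other source is also compromised.
import hashlib
import os
import time

def hardware_rng(n: int) -> bytes:
    # Stand-in for a possibly-suspect RDRAND-style source.
    return os.urandom(n)

def mixed_random(n: int = 32) -> bytes:
    h = hashlib.sha256()
    h.update(hardware_rng(32))                            # suspect source
    h.update(os.urandom(32))                              # OS entropy pool
    h.update(time.perf_counter_ns().to_bytes(8, "big"))   # timing jitter
    return h.digest()[:n]

key = mixed_random()   # never the raw hardware output
```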
Large companies that have significant investments in VPN or
TLS-acceleration hardware are probably screwed. Even if the gear is
demonstrably flawed, look for most companies to downplay the risk in
order to avoid having to suddenly replace it.
Time will tell exactly which techniques are still safe and which aren’t, but my WAG (just for the record, so that there’s something to give a thumbs-up / thumbs-down on later) is that TLS in FIPS-compliant mode, on commodity PC hardware with hardware RNGs disabled or absent at both ends of the connection, using throwaway certificates (i.e. no use of conventional PKI like certificate authorities) validated via a side channel, will turn out to be fairly good. But a lot of work will have to be invested in validating everything to be sure.
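As an illustration of the throwaway-certificate part of that guess, here’s a sketch using the third-party pyca/cryptography library: generate a short-lived, self-signed certificate and publish its fingerprint over a side channel rather than chaining it to any CA. The name and lifetime below are arbitrary choices, not a recommendation.

```python
# Sketch: a "throwaway" self-signed certificate whose SHA-256 fingerprint
# is distributed out-of-band, sidestepping CA-based PKI entirely.
# Requires the third-party pyca/cryptography package.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "throwaway")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=7))   # deliberately short-lived
    .sign(key, hashes.SHA256())
)

fp = cert.fingerprint(hashes.SHA256()).hex()
print(":".join(fp[i:i + 2] for i in range(0, len(fp), 2)))  # read this over a side channel
```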
Also, my overall guess is that neither the open-source world nor the commercial, closed-source world will come out entirely unscathed in terms of reputation for quality. However, the worst vulnerabilities are likely to have been inserted where the fewest eyes were looking for them, which probably means hardware, or tightly integrated firmware/software developed by single companies and distributed only in compiled form (literally compiled, or baked into an ASIC).
As usual, simpler will turn out to be better, and
generic hardware running widely-available software will be better than
dedicated boxes filled with a lot of proprietary silicon or code.
So we’ll see how close I am to the mark. Interesting times.