Related to yesterday’s post about the AP article confirming that, in
fact, modern cryptography is pretty good, there’s a
reasonably decent discussion going on at Hacker News in
response, with a mixture of the usual fearmongering and unjustified
pessimism, but also some very good information.
This post, by HN user “colordrops”, is particularly worth
discussing, despite falling a bit on the “pessimistic” side of things:
It seems that most people are completely in the dark when it comes
to security, including myself, but there are some principles that
should be unwavering that regularly get ignored again with every new
iteration of “secure” software:
- If there is a weak layer in the stack, from the physical layer to
the UI, then the system is not secure. Even if your messaging app
is secure, your messages are not secure if your OS is not secure
- If the source code is not available for review, the software is
not secure
- If you or someone you trust has not done a full and thorough
review of all components of the stack you are using, the software
is not secure
- Even if the source code is available, the runtime activity must be
audited, as it could download binaries, take unsavory actions, or
make unsavory connections.
- On the same note, if you do not have a mechanism for verifying the
authenticity of the entire stack, the software is not secure.
- If any part of the stack has ever been compromised, including
leaving your device unlocked for five minutes in a public place,
the software is not secure.
I could go on, and I’m FAR from a security expert. People compromise
way too much on security, and make all kinds of wrong assumptions
when some new organization comes out and claims that their software
is the “secure” option. We see this with apps like Telegram and
Signal, where everyone thinks they are secure, but if you really dig
down, most people believe they are secure for the wrong reasons:
- The dev team seems like honest and capable people
- Someone I trust or some famous person said this software is secure
- They have a home page full of buzzwords and crypto jargon
- They threw some code up on github
- I heard they are secure in half a dozen tweets and media channels
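
One point in the quoted comment, the call for “a mechanism for
verifying the authenticity of the entire stack”, is at least partially
actionable today: at the single-artifact level it reduces to checking
a digest or signature against a value published out-of-band. Here is a
minimal sketch in Python, with a hypothetical file name and a
placeholder digest (the real expected value has to come from a channel
you trust independently of the download itself):

```python
# Minimal sketch of artifact verification. The file name and expected
# digest below are placeholders, not real release values.
import hashlib

EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large downloads don't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("messenger-release.tar.gz") == EXPECTED_SHA256:
    print("digest matches the published value")
else:
    print("MISMATCH: do not install")
```

Of course, this authenticates only one artifact; it says nothing about
the compiler that built it, the OS it runs on, or the baseband beneath
it, which is exactly the commenter’s point.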
First, I have to take serious issue with the author’s use of “secure”
as an absolute. Thinking of “secure” as a binary, is-it-or-isn’t-it
state is only useful in the most academic corners of cryptography,
where we can talk about an algorithm being “secure” against certain
kinds of analysis or attack. It is bordering on useless once you get
into the dirtiness of the real world.
Implementations are not “secure” in the absolute. Implementations may
be secure within a certain threat space, or for a certain set of
needs, but security is always relative to some perceived adversary.
If your adversary has unlimited resources, then no implementation will
ever be secure over a long timescale. (An ‘unlimited resources’
adversary will just build Dyson spheres around a few nearby stars
and use them to power computronium bruteforce machines. Good
thing you don’t really have an unlimited-resources adversary, do
you?)
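
To put rough numbers on that parenthetical: even at the Landauer limit,
roughly kT ln 2 joules per irreversible bit operation, merely counting
through a 256-bit keyspace costs more energy than tens of billions of
stars emit over their entire lifetimes. A back-of-the-envelope sketch
(all figures here are my own illustrative assumptions, not anything
from the thread):

```python
# Energy cost of enumerating a 256-bit keyspace at the Landauer limit,
# compared against total stellar output. All figures are rough,
# illustrative assumptions.
import math

K_BOLTZMANN = 1.380649e-23      # Boltzmann constant, J/K
T_CMB = 2.7                     # K; generously, computing at deep-space temperature
LANDAUER_J_PER_OP = K_BOLTZMANN * T_CMB * math.log(2)  # ~2.6e-23 J per bit flip

ENERGY_TO_COUNT = (2 ** 256) * LANDAUER_J_PER_OP  # just incrementing a counter

SUN_WATTS = 3.8e26              # approximate solar luminosity
SUN_LIFETIME_S = 10e9 * 3.15e7  # ~10 billion years, in seconds
SUN_LIFETIME_J = SUN_WATTS * SUN_LIFETIME_S

print(f"Counting to 2^256: {ENERGY_TO_COUNT:.2e} J")
print(f"One star, whole lifetime: {SUN_LIFETIME_J:.2e} J")
print(f"Stars required: {ENERGY_TO_COUNT / SUN_LIFETIME_J:.2e}")
```

And that is just incrementing a counter, before the adversary does any
actual work per candidate key. A few nearby Dyson spheres don’t get
you there.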
Security is all about tradeoffs. As you make an implementation more
robust, it becomes more cumbersome to use. Computers have done really
amazing things to make formerly-cumbersome security easier to use,
but this tradeoff still exists and probably will always exist once you
start talking about practical attacks.
The implementation standards for government-level security, e.g. the
handling of classified information by the US DOD and similar, require
electronically shielded rooms and specially vetted equipment to
prevent information leakage at the endpoints. But as the last few
years have demonstrated, these systems — while extremely impressive
and well-constructed — have still leaked information through human
factors compromises. So in that sense, anything that involves a
person is arguably “insecure”. For most applications, there’s no
getting around that.
Beyond that, though, the author does make some good points about users
believing that a program is “secure” for the wrong reasons, including
buzzword-laden webpages, unverified claims in the media, or
endorsement by famous people who do not have a significant reputation
in the IT security community at stake. These are all real problems
that have been exploited to push poorly-designed software onto users
who deserve better.
Many modern apps, including not only Telegram and Signal but also
Facebook Messenger in its end-to-end encrypted mode, and various
corporate email systems, are “secure enough” for particular
needs. They’ll almost certainly hide what you’re doing or saying from
your family, friends, nosy neighbors, boss (provided you don’t work
for an intelligence or law enforcement agency), spouse, etc., which is
what I suspect all but a very small fraction of users actually
require. So, for most people, they are functionally secure.
For the very small number of users whose activities are likely to
cause them to be of interest to modern, well-funded, First World
intelligence agencies, essentially no application running on a modern
smartphone is going to be secure enough.
As others on HN point out, modern smartphones are
essentially “black boxes” running vast amounts of closed-source,
unauditable code, including in critical subsystems like the
“baseband”. One anonymous user even alleges that:
The modifications installed by your phone company, etc. are not open
source. The baseband chip’s firmware is not open sourced. I’ve even
heard of DMA being allowed over baseband as part of the Lawful
Intercept Protocol.
There is, naturally, no sourcing on the specific claim about DMA over
the cellular connection, but that would be a pretty neat trick: it
would essentially be one step above remote code execution, and give a
remote attacker access to the memory space of any application running
on the device, perhaps without any sign (such as a typical rootkit or
spyware suite would leave) that the device was tapped. Intriguing.
I am, personally, not really against intelligence agencies having
these sorts of capabilities. The problem arises when they become too
easy or cheap to use. The CIA’s stash of rootkits and zero-days is
unlikely to be deployed except in bona fide (at least, perceived to be
bona fide) national security situations, because of the expense
involved in obtaining those sorts of vulnerabilities and the sharp
drop in utility after their first use. They’re single-shot weapons,
basically. If backdoor advocates were to get their way and manage to
equip every consumer communications device with a mandatory backdoor,
though, it would be only a matter of time before the usage criteria
for that backdoor broadened from national security / terrorism
scenarios, to serious domestic crimes like kidnapping, and then on
down the line until it was being used for run-of-the-mill drug
possession cases.
And even if you think (and I will strongly disagree, but it’s out of
scope for this post) that drug possession cases deserve the
availability of those sorts of tools, in the process of that
trickling-down of capabilities, the backdoor would also doubtless fall
into the hands of unintended third parties: from the cop who wants to
see if their wife or husband is cheating on them, to organized crime,
to Internet trolls and drive-by perverts looking for nude photos.
Such is the lifecycle of security vulnerabilities: it all ends up in
the hands of the “script kiddies” eventually.
Nobody has found a way to break that lifecycle so far: today’s
zero-days are tomorrow’s slightly-gifted high-schooler’s tools for
spying on the girl or boy they fancy in class. Intentionally creating
large vulnerabilities — which is exactly what a backdoor would be —
just means everyone along the food chain would get a bigger meal as
the backdoor became more and more widely available.
The only solution, as I see it, is to keep doing pretty much what
we’ve been doing: keep funding security research to harden devices and
platforms, keep funding the researchers on the other side of the
equation (both in the private sector and in the IC) who try to pick
away at them, and hope that the balance remains relatively constant
and similar to what we currently enjoy: enough security for the
average person to keep their communications private from those they
don’t want to share them with, impressively secure communications for
those willing to put in the effort, but enough capability on the
law-enforcement and intelligence side to keep the communications of
organized crime and terrorism disrupted.