Kadin2048's Weblog

Sat, 25 Mar 2017

It has been widely reported this week that Google has formally announced plans to kill off Google Talk, its original popular IM product which (for most of us) was supplanted by “Hangouts” a few years back.

I still think that Google Talk was the high-water mark for Google’s “over the top” (OTT) chat efforts; it was reliable, standards-based, interoperable with third-party clients and servers, feature-rich, and well integrated with other Google products such as Gmail. You could federate Google Talk with a standard XMPP server and communicate back and forth across domains, or use a third-party desktop client like Adium or Pidgin. Google Talk message logs appeared in Gmail almost like email messages, making them easy to backup (you could retrieve them with IMAP!) or search.
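
For the curious: something like the following Python sketch would pull those old chat logs down for backup over IMAP, assuming the “Chats” label is exposed to IMAP clients (the folder name and credentials below are placeholders, and your Gmail label settings may differ).

import imaplib

# Sketch: back up chat logs over IMAP. The folder name is an assumption;
# use conn.list() to see what your account actually exposes.
conn = imaplib.IMAP4_SSL('imap.gmail.com')
conn.login('you@gmail.com', 'app-specific-password')  # placeholder credentials

status, _ = conn.select('"[Gmail]/Chats"', readonly=True)
if status == 'OK':
    _, data = conn.search(None, 'ALL')
    for num in data[0].split():
        _, msg_data = conn.fetch(num, '(RFC822)')
        with open('chat-%s.eml' % num.decode(), 'wb') as f:
            f.write(msg_data[0][1])  # the raw chat transcript, as an email message

conn.logout()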

Looking back on those halcyon days, we hardly knew how good we had it.

Everything Google has done since then has made the basic user experience gradually more shitty.

Today, Hangouts works with Adium and Pidgin sometimes, depending on what Google has done to break things lately. XMPP federation with other servers is being disabled in the near future, for no good reason that I can tell, finally making Hangouts the walled garden that Google apparently wants. Integration with other products is inconsistent: to use some Hangouts features, you need to use the primary web interface (hangouts.google.com), but other key features — message search being the biggest one — are missing entirely, and require you to go into Gmail. Gmail! Why the fuck do I need to go into my email client to search my messaging logs? Who knows. That’s just how Google makes you do it. And of course in Gmail, Hangouts logs are no longer stored as emails; they’re in some bizarre format where logs are broken up arbitrarily into little chunks (sometimes one log chunk per message), and in some cases there’s no way to get from a message in Gmail’s search results back to a coherent view of the conversation that it occurred in.

In the meantime they added voice, which is sorta neat but nobody I know really uses, and video / screensharing, which is very cool but uses its own web interface and seems suspiciously like a bolt-on addition.

Basically, Hangouts is broken.

But rather than fix it, Google seems determined to screw it up some more, in order to turn it into an “enterprise” messaging system (read: doomed Slack competitor). On the chopping block in the near term is the integration of carrier SMS and MMS into the Hangouts mobile app. I guess because enterprise users don’t use text messages..? Only Google knows why, and they’re not saying anything coherent.

For us poor plebs, they created “Allo”, a WhatsApp clone combining all the downsides of OTT messaging and carrier SMS into one shit sandwich of a product. (Just the downsides of carrier SMS, like requiring a POTS phone number; it doesn’t actually do carrier SMS, of course. That’s a new, separate app.) The sole deal-sweetener was the inclusion of Google Assistant, which could have just as easily been added into Hangouts. But instead they made it an Allo exclusive, ensuring that nobody really got to use it. Bravo.

Here’s the worst part: Hangouts is broken, Google is not going to fix it, and the best alternative for Joe User right now is … drumroll, please … Facebook Messenger.

That’s right, Facebook Messenger. Official platform of your 14-year-old nephew, at least as of five years ago, before he moved on to Snapchat or something else cooler. That’s the competition that Google is basically surrendering to. It’s like losing a footrace to someone too stupid to walk and chew gum at the same time, but only because you decided it’d be fun to saw your own legs off.

In fairness — very, very grudging fairness, because Facebook at this point is about one forked tail away from being Actually The Devil Himself in terms of user-hostility — Facebook Messenger isn’t… all that bad. I can’t believe I just wrote that. I feel dirty.

However, it’s hard to avoid: Facebook’s Messenger is just the better product, or is likely to become the better one soon. Let us count the ways:

  • It has the userbase, because everyone with a Facebook account also has a Messenger account. However, it doesn’t require FB membership to use Messenger: you can create a Messenger-only account by validating a phone number (much like WhatsApp or Signal or Allo). So it’s got all of them beat there, and network effects mean that the number of people already using it is always the most important feature of a messaging service.

  • It allows end-to-end encryption but isn’t wed to it (as Signal is), meaning it can do things that are hard to do in a 100% E2E encrypted architecture, like letting you simultaneously use multiple devices in the course of a day and have all your messages arrive on all of them. All your logs can be searched from any device, too.

  • Speaking of logs, Facebook already has better facilities for searching your past conversations than Hangouts. (The only service that seems to be better is Slack, which is somewhat ironic given that Google apparently wants Hangouts to be its Slack competitor, and Google can’t beat Slack at the one thing that you’d expect Google to actually do well.) Finding a conversation based on a keyword and then being able to read it in context is already far easier from Messenger’s website than from Gmail’s, and of course you can’t search conversations from Hangouts’ main website at all.

  • On mobile, at least on Android, the Hangouts app is better for the moment, but I don’t expect that to stay the same once Google starts forklift-upgrading it to be “enterprisey-er”. And the Messenger app isn’t terrible (unlike the main Facebook app, which is an unstable battery- and data-hogging testament to bad ideas poorly implemented). The recent inclusion of Snapchat-like features nobody really asked for notwithstanding, Messenger does its job and has some occasional nice features, like very low-friction picture and short video messaging. At least on my device, it hasn’t crashed or ANRed in as long as I can remember.

Personally, I’ll probably continue to use Hangouts until the bitter end, because I’m lazy and resistant to change, but I suspect Messenger is where most of my friends are going to end up, and those who don’t want to use a FB product will largely just end up getting carrier SMS/MMS messages again.

Congrats, Google. You could have owned messaging, but you screwed it up. You could probably still salvage the situation, but nothing I’ve seen from the company indicates that they care to, or are even capable of admitting the extent of their miscues.


[/technology/mobile] permalink

Sun, 12 Mar 2017

Related to yesterday’s post about the AP article confirming that, in fact, modern cryptography is pretty good, there’s a reasonably decent discussion going on at Hacker News in response, with a mixture of the usual fearmongering / unjustified pessimism but also some very good information.

This post, by HN user “colordrops”, is particularly worth discussing, despite falling a bit on the “pessimistic” side of things:

It seems that most people are completely in the dark when it comes to security, including myself, but there are some principles that should be unwavering that regularly get ignored again with every new iteration of “secure” software:

  • If there is a weak layer in the stack, from the physical layer to the UI, then the system is not secure. Even if your messaging app is secure, your messages are not secure if your OS is not secure
  • If the source code is not available for review, the software is not secure
  • If you or someone you trust has not done a full and thorough review of all components of the stack you are using, the software is not secure
  • Even if the source code is available, the runtime activity must be audited, as it could download binaries or take unsavory actions or connections.
  • On the same note, if you do not have a mechanism for verifying the authenticity of the entire stack, the software is not secure.
  • If any part of the stack has ever been compromised, including leaving your device unlocked for five minutes in a public place, the software is not secure.

I could go on, and I’m FAR from a security expert. People compromise way too much on security, and make all kinds of wrong assumptions when some new organization comes out and claims that their software is the “secure” option. We see this with apps like Telegram and Signal, where everyone thinks they are secure, but if you really dig down, most people believe they are secure for the wrong reasons:

  • The dev team seems like honest and capable people
  • Someone I trust or some famous person said this software is secure
  • They have a home page full of buzzwords and crypto jargon
  • They threw some code up on github
  • I heard they are secure in half a dozen tweets and media channels

First, I have to take serious issue with the author’s use of “secure” as a sort of absolute. Thinking of “secure” as a binary, is-it-or-isn’t-it state is only useful in the most academic corners of cryptography, where we can talk about an algorithm being “secure” against certain kinds of analysis or attack. It is bordering on useless when you get into the dirtiness of the real world.

Implementations are not “secure” in the absolute. Implementations may be secure within a certain threat space, or for a certain set of needs, but security is always relative to some perceived adversary. If your adversary has unlimited resources, then no implementation will ever be secure over a long timescale. (An ‘unlimited resources’ adversary will just build Dyson spheres around a few nearby stars and use them to power computronium bruteforce machines. Good thing you don’t really have an unlimited-resources adversary, do you?)

Security is all about tradeoffs. As you make an implementation more robust, it becomes more cumbersome to use. Computers have done really amazing things to make formerly-cumbersome security easier to use, but this tradeoff still exists and probably will always exist once you start talking about practical attacks.

The implementation standards for government-level security, e.g. the handling of classified information by the US DOD and similar, require electronically shielded rooms and specially vetted equipment to prevent information leakage at the endpoints. But as the last few years have demonstrated, these systems — while extremely impressive and well-constructed — have still leaked information through human factors compromises. So in that sense, anything that involves a person is arguably “insecure”. For most applications, there’s no getting around that.

Beyond that, though, the author does make some good points about users believing that a program is “secure” for the wrong reasons, including buzzword-laden webpages, unverified claims in the media, or endorsement by famous people who do not have a significant reputation in the IT security community at stake. These are all real problems that have been exploited to push poorly-designed software onto users who deserve better.

Many modern apps, including not only Telegram and Signal but also Facebook Messenger in its end-to-end encrypted mode, and various corporate email systems, are “secure enough” for particular needs. They’ll almost certainly hide what you’re doing or saying from your family, friends, nosy neighbors, boss (provided you don’t work for an intelligence or law enforcement agency), spouse, etc., which is what I suspect all but a very small fraction of users actually require. So, for most people, they are functionally secure.

For the very small number of users whose activities are likely to cause them to be of interest to modern, well-funded, First World intelligence agencies, essentially no application running on a modern smartphone is going to be secure enough.

As others on HN point out, modern smartphones are essentially “black boxes” running vast amounts of closed-source, unauditable code, including in critical subsystems like the “baseband”. One anonymous user even alleges that:

The modifications installed by your phone company, etc. are not open source. The baseband chip’s firmware is not open sourced. I’ve even heard of DMA being allowed over baseband as part of the Lawful Intercept Protocol.

There is, naturally, no sourcing on the specific claim about DMA over the cellular connection, but that would be a pretty neat trick: it would essentially be one step above remote code execution, and give a remote attacker access to the memory space of any application running on the device, perhaps without any sign (such as a typical rootkit or spyware suite would leave) that the device was tapped. Intriguing.

I am, personally, not really against intelligence agencies having these sorts of capabilities. The problem comes when they are too easy or cheap to use. The CIA’s stash of rootkits and zero-days is unlikely to be deployed except in bona fide (at least, perceived to be bona fide) national security situations, because of the expense involved in obtaining those sorts of vulnerabilities and the sharp drop in utility once they’ve been used once. They’re single-shot weapons, basically. If some were to get their way and manage to equip every consumer communications device with a mandatory backdoor, though, it would be only a matter of time before the usage criteria for that backdoor broadened from national security / terrorism scenarios, to serious domestic crimes like kidnapping, and then on down the line until it was being used for run-of-the-mill drug possession cases. And even if you think (and I will strongly disagree, but it’s out of scope for this post) that drug possession cases deserve the availability of those sorts of tools, in the process of that trickling-down of capabilities, it would also doubtless fall into the hands of unintended third parties: from the cop who wants to see if their wife or husband is cheating on them, to organized crime, to Internet trolls and drive-by perverts looking for nude photos. Such is the lifecycle of security vulnerabilities: it all ends up in the hands of the “script kiddies” eventually.

Nobody has found a way to break that lifecycle so far: today’s zero-days are tomorrow’s slightly-gifted highschooler’s tools for spying on the girl or boy they fancy in class. Intentionally creating large vulnerabilities — which is exactly what a backdoor would be — just means everyone along the food chain would get a bigger meal as it became more and more widely available.

The only solution, as I see it, is to keep doing pretty much what we’ve been doing: keep funding security research to harden devices and platforms, keep funding the researchers on the other side of the equation (both in the private sector and in the IC) who try to pick away at it, and hope that the balance remains relatively constant, and similar to what we currently enjoy: enough security for the average person to keep their communications private from those they don’t want to share them with, impressively secure communications for those who want to put in the effort, but enough capability on the law-enforcement and intelligence side to keep organized crime and terrorism disrupted in terms of communications.


[/technology] permalink

Sat, 11 Mar 2017

Way back in 2013, I wrote about the NSA leaks and how I didn’t think that they signified any fundamental change in the centuries-old balance of power between cryptographers and cryptanalysts. It would seem that the New York Times has finally worked through their backlog and more or less agrees.

(The article in question comes from the AP, so if the NYT website doesn’t want to load or gets paywalled or taken out by a Trump Republic drone strike at some point in the future, you can always just Google the title and turn it up. Probably.)

The tl;dr version:

Documents purportedly outlining a massive CIA surveillance program suggest that CIA agents must go to great lengths to circumvent encryption they can’t break. In many cases, physical presence is required to carry off these targeted attacks. […] It’s much like the old days when “they would have broken into a house to plant a microphone,” said Steven Bellovin, a Columbia University professor who has long studied cybersecurity issues.

In other words, it’s pretty much what we expect the CIA to be doing, and what they’re presumably pretty good at, or at least ought to be pretty good at given the amount of time they’ve had to get good at it.

Which means that I was pretty much on target back in 2013, and the sky-is-falling brigade was not:

My guess […] is that there’s nothing fundamentally wrong with public key crypto, or even in many carefully-built implementations. It’s when you start optimizing for cost or speed that you open the door.

So: if you are very, very careful, you will still be able to build up a reasonably-secure infrastructure using currently available hardware and software. (‘Reasonably secure’ meaning resistant to untargeted mass surveillance, not necessarily to a targeted attack that might include physical bugging: that’s a much higher bar.) However, some code may need to be changed in order to eliminate any reliance on possibly-compromised components, such as hardware RNGs / accelerators that by their nature are difficult to audit.

In the past few years, most widely-used crypto libraries have moved away from hardware PRNGs that are thought to be suspect, and generally taken a less seat-of-the-pants approach to optimizing for speed than was previously (sometimes) the case. For security, this is largely a good thing.

In terms of intelligence-gathering capability, it’s presumably a degradation vs. the mass-intercept capability that certain agencies might have had when more traffic was unencrypted or poorly-encrypted, but it was foolish to believe that situation was going to go on forever. End-to-end crypto has been a goal of the pro-security tech community (formerly, and now cringingly, referred to as “cypherpunks”, back when that seemed like a cool name) for almost two decades, and would have happened eventually.

The IC still has significant tools at its disposal, including traffic analysis and metadata analysis, targeted bruteforcing of particular messages, encrypted content, or SSL/TLS sessions, endpoint compromises, human factors compromise, and potentially future developments in the quantum cryptography/cryptanalysis space. Without defending or minimizing Snowden et al, I do not think that it means the end of intelligence in any meaningful sense; those predictions, too, were overstated.

Anyway, it’s always nice to get some validation once in a while that the worst predictions don’t always turn out to be the correct ones. (Doesn’t quite make up for my hilariously blown call on the election, but at least I wasn’t alone in that one.)


[/technology] permalink

Thu, 13 Oct 2016

At some point, Yahoo started sticking a really annoying popup on basically every single Flickr page, if you aren’t logged in with a Yahoo ID. Blocking these popups is reasonably straightforward with uBlock or ABP, but it took me slightly longer than it should have to figure it out.

As usual, here’s the tl;dr version. Add this to your uBlock “My filters”:

! Block annoying Flickr login popups
www.flickr.com##.show.mini-footer.signup-footer

That’s it. Note that this doesn’t really “block” anything; it’s a CSS hiding rule. For this to work you have to ensure that ‘Cosmetic Filters’ in uBlock / uBlock Origin is enabled.

The slightly longer story as to why this took more than 10 seconds of my time is that the default uBlock rule that’s created when you right-click on one of the popups and select ‘Block Element’ doesn’t work well. That’s because Yahoo is embedding a bunch of random characters in the CSS for each one, which changes on each page load. (It’s not clear to me whether this is designed expressly to defeat adblockers / popup blockers or not, but it certainly looks a bit like a blackhat tactic.)

Using the uBlock Origin GUI, you have to Ctrl-click (Cmd-click on a Mac) on the top element hiding rule in order to get a ‘genericized’ version of it that removes the full CSS path, and works across page reloads. I’d never dug into any of the advanced features of uBlock Origin before — it’s always just basically worked out of the box, insofar as I needed it to — so this feature was a nice discovery.

Why, exactly, Yahoo is shoving this annoying popup in front of content on virtually every Flickr page, to every non-logged-in viewer, isn’t clear, although we can certainly speculate: Yahoo is probably desperate at this point to get users to log in. Part of their value as a company hinges on the number of active users they can claim. So each person they hard-sell into logging in is some amount more they’ll probably get whenever somebody steps in and buys them.

As a longtime Flickr user, that end can’t come soon enough. It was always disappointing that Flickr sold out to Yahoo at all; somewhere out there, I believe there’s a slightly-less-shitty parallel universe where Google bought Flickr, and Yahoo bought YouTube, and Flickr’s bright and beautiful site culture was saved just as YouTube’s morass of vitriol and intolerance became Yahoo’s problem to moderate. Sadly, we do not live in that universe. (And, let’s be honest, Google would probably have killed off Flickr years ago, along with everything else in their Graveyard of Good Ideas. See also: Google Reader.)

Perhaps once Yahoo is finally sold and broken up for spare parts, someone will realize that Flickr still has some value and put some effort into it, aside from strip-mining it for logins as Yahoo appears to be doing. A man can dream, anyway.


[/technology/web] permalink

Wed, 14 Sep 2016

Everyone’s favorite security analyst Bruce Schneier seems to think that somebody is learning how to “take down the Internet” by repeatedly probing key pieces of “basic infrastructure” — exactly what’s being probed isn’t stated, but the smart money is on the DNS root servers. Naturally, who is doing this is left unsaid as well, although Schneier does at least hazard the obvious guess at China and Russia.

If this is true, it’s a seemingly sharp escalation towards something that might legitimately be called ‘cyberwarfare’, as opposed to simply spying-using-computers, which is most of what gets lumped in under that label today. Though, it’s not clear exactly why a state-level actor would want to crash DNS; it’s arguably not really “taking down the Internet”, although it would mess up a lot of stuff for a while. Even if you took down the root DNS servers, it wouldn’t stop IP packets from being routed around (the IP network itself is pretty resilient), and operators could pretty quickly unplug their caching DNS resolvers and let them run independently, restoring service to their users. You could create a mess for a while, but it wouldn’t be crippling in the long term.
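
Checking whether the roots are even answering, which is presumably a crude version of whatever this probing measures, takes only a few lines of Python with the third-party dnspython package; 198.41.0.4 is a.root-servers.net’s well-known address. A rough sketch:

import dns.exception
import dns.message
import dns.query

# Ask a root server directly for the root zone's NS records, the sort of
# query an independent (non-forwarding) resolver sends when it can't rely
# on anyone upstream.
query = dns.message.make_query('.', 'NS')
try:
    response = dns.query.udp(query, '198.41.0.4', timeout=3)
    for rrset in response.answer:
        print(rrset)
except dns.exception.Timeout:
    print('no answer from a.root-servers.net')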

Except perhaps as one component of a full-spectrum, physical-world attack, it doesn’t make a ton of sense to disrupt a country’s DNS resolvers for a few hours. And Russia and China don’t seem likely to actually attack the U.S. anytime soon; relations with both countries seem to be getting worse over time, but they’re not shooting-war bad yet. So why do it?

The only reason that comes to mind is that it’s less ‘preparation’ than ‘demonstration’. It’s muscle flexing on somebody’s part, and not particularly subtle flexing at that. The intended recipient of the message being sent may not even be the U.S., but some third party: “see what we can do to the U.S., and imagine what we can do to you”.

Or perhaps the eventual goal is to cover for a physical-world attack, but not against the U.S. (where it would probably result in the near-instant nuclear annihilation of everyone concerned). Perhaps the idea is to use a network attack on the U.S. as a distraction, while something else happens in the real world? Grabbing eastern Ukraine, or Taiwan, just as ideas.

Though an attack on the DNS root servers would be inconvenient in the short run, I am not sure that in the long run that it would be the worst thing to happen to the network as an organism: DNS is a known weakness of the global Internet already, one that desperately needs a fix but where there’s not enough motivation to get everyone moving together. An attack would doubtless provide that motivation, and be a one-shot weapon in the process.

Update: This article from back in April, published by the ‘Internet Governance Project’, mentions a Chinese-backed effort to weaken US control over the root DNS, either by creating additional root servers or by potentially moving to a split root. So either the probing or a future actual disruption of DNS could be designed to further this agenda.

In 2014, [Paul] Vixie worked closely with the state-owned registry of China (CNNIC) to promote a new IETF standard that would allow the number of authoritative root servers to increase beyond the current limit of 13. As a matter of technical scalability, that may be a good idea. The problem is its linkage to a country that has long shown a more than passing interest in a sovereign Internet, and in modifying the DNS to help bring about sovereign control of the Internet. For many years, China has wanted its “own” root server. The proposal was not adopted by IETF, and its failure there seems to have prompted the formation and continued work of the YETI-DNS project.

The YETI-DNS project appears, at the moment, to be defunct. Still, China would seem to have the most to gain by making the current U.S.-based root DNS system seem fragile, given the stated goal of obtaining their own root servers.


[/technology] permalink

Fri, 26 Aug 2016

The other day I discovered an interesting Python behavior that I somehow had managed not to hit before — in fairness, I use Python mostly for scripting and automation, not ‘real’ software development, but I still thought I understood the basics reasonably well.

Can you spot the problem? The following is designed to remove words from a list if they are below a certain number of characters, specified by args.minlength:

for w in words:
    if len(w) < int(args.minlength):
        words.remove(w)

The impending misbehavior, if you didn’t catch it by this point, is not necessarily obvious. It won’t barf an error at you, and you can actually get it to pass a trivial test, depending on how the test data is configured. But on a real dataset, you’ll end up with lots of words shorter than args.minlength left in words after you thought you had iterated through and cleaned them!

(If you want to play with this on your own, try running the above loop against the contents of your personal iSpell dictionary — typically ~/.ispell_english on Unix/Linux — or some other word list. The defect will quickly become apparent.)
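
Concretely, a little harness like the following, with a hard-coded stand-in for args.minlength, will demonstrate it; substitute whatever word list you have handy.

import os

minlength = 5  # stand-in for args.minlength

# Load a word list; ~/.ispell_english is just the example from above.
path = os.path.expanduser('~/.ispell_english')
with open(path) as f:
    words = [line.strip() for line in f if line.strip()]

# The buggy loop from above.
for w in words:
    if len(w) < minlength:
        words.remove(w)

# Count the short words that survived the "cleaning" pass.
leftovers = [w for w in words if len(w) < minlength]
print('%d words below the minimum length survived' % len(leftovers))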

A good description of the problem, along with several solutions, is of course found on Stack Overflow. But to save you the click: the problem is iterating over a mutable object, such as a list, and then modifying the list (e.g. by removing items) inside the loop. Per the Python docs, you shouldn’t do that:

If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy.

The solution is easy:

for w in words[:]:
    if len(w) < int(args.minlength):
        words.remove(w)

Adding the slice notation causes Python to iterate over a copy of the list (pre-modification), which is what you actually want most of the time, and then you’re free to modify the actual list all you want from inside the loop. There are lots of other possible solutions if you don’t like the slice notation, but that one seems pretty elegant (and it’s what’s recommended in the Python docs so it’s presumably what someone else reading your code ought to expect).
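
For completeness, two of the common alternatives: build a new list with a comprehension, or filter “in place” with a slice assignment, which keeps the original list object alive for any other code that holds a reference to it.

# Build a new, filtered list:
words = [w for w in words if len(w) >= int(args.minlength)]

# Or filter "in place" via slice assignment, preserving the original list object:
words[:] = [w for w in words if len(w) >= int(args.minlength)]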

I’d seen the for item in list[:]: construct in sample code before, but the exact nature of the bugs it prevents hadn’t been clear to me until now. Perhaps this will be enlightening to someone else as well.


[/technology/software] permalink

Mon, 22 Aug 2016

Ars Technica has a nice article, published earlier this month, on the short life of the Digital Compact Cassette format, one of several attempts to replace the venerable analog cassette tape with a digital version, prior to its eventual demise in the download era.

At risk of dating myself, I remember the (very brief) rise and (anticlimactic) fall of the Digital Compact Cassette, although I was a bit too poor to be in the target market of early adopters and hi-fi-philes that the first decks were aimed at. And while the Ars article is decent, it ignores the elephant in the room that contributed mightily to DCC’s demise: DRM.

DCC was burdened by a DRM system called SCMS, also present in the consumer version of DAT. This inclusion was not the fault of Philips or Matsushita (later Panasonic), who designed DCC, but a result of an odious RIAA-backed law passed in 1992, the Audio Home Recording Act, which mandated it in all “digital audio recording device[s]”.

It is telling that of the variety of formats which were encumbered by SCMS, exactly zero of them have ever succeeded in the marketplace in a way that threatened the dominant formats. The AHRA was (and remains, de jure, because it’s still out there on the books, a piece of legal “unexploded ordnance” waiting for someone to step on it) the RIAA’s most potent and successful weapon in terms of suppressing technological advancement and maintaining the status quo throughout the 1990s.

Had it not been for the AHRA and SCMS, I think it’s likely that US consumers might have had not one but two alternative formats for digital music besides the CD, and perhaps three: consumer DAT, DCC, and MiniDisc. Of these, DAT is probably the best format from a pure-technology perspective — it squeezes more data into a smaller physical space than the other two, eliminating the need for lossy audio compression — but DAT decks are mechanically complex, owing to their helical scan system, and the smallest portable DATs never got down to Walkman size. DCC, on the other hand, used a more robust linear tape system, and perhaps most importantly it was compatible with analog cassette tapes. I think there is a very good chance that it could have won the battle, if the combatants had been given a chance to take the field.

But the AHRA and SCMS scheme conspired to make both consumer-grade DAT and DCC unappealing. Unlike today, where users have been slowly conditioned to accept that their devices will oppose them at every opportunity in the service of corporations and their revenue streams, audio enthusiasts from the analog era were understandably hostile to the idea that their gear might stop them from doing something it was otherwise quite physically capable of doing, like dubbing from one tape to another, or from a CD to a tape, in the digital domain. And a tax on blank media just made the price premium for digital, as opposed to analog, that much higher. If you are only allowed to make a single generation of copies due to SCMS, and if you’re going to pay extra for the digital media due to the AHRA, why not just get a nice analog deck with Dolby C or DBX Type 2 noise reduction, and spend the savings on a boatload of high-quality Type IV metal cassettes?

That was the question that I remember asking myself anyway, at the time. I never ended up buying a DCC deck, and like most of the world continued listening to LPs, CDs, and analog cassettes right up until cheap computer-based CD-Rs and then MP3 files dragged the world of recorded music fully into the digital age, and out of the shadow of the AHRA.


[/technology] permalink

Mon, 15 Aug 2016

Very cool open-source project VeraCrypt is all over the news this week, it seems. First when they announced that they were going to perform a formal third-party code audit, and had come up with the funds to pay for it; and then today when they claimed their emails were being intercepted by a “nation-state” level actor.

The audit is great news, and once it’s complete I think we’ll have even more confidence in VeraCrypt as a successor to TrueCrypt (which suffered from a bizarre developer meltdown [1] back in 2014).

The case of the missing messages

However, I’m a bit skeptical about the email-interception claim, at least based on the evidence put forward so far. It may be the case — and, let’s face it, should be assumed — that their email really is being intercepted by someone, probably multiple someones. Frankly, if you’re doing security research on a “dual use” tool [2] like TrueCrypt and don’t think that your email is being intercepted and analyzed, you’re not participating in the same consensus reality as the rest of us. So, not totally surprising on the whole. Entirely believable.

What is weird, though, is that the evidence for the interception is that some messages have mysteriously disappeared in transit.

That doesn’t really make sense. It doesn’t really make sense from the standpoint of the mysterious nation-state-level interceptor, because making the messages disappear tips your hand, and it also isn’t really consistent with how most modern man-in-the-middle style attacks work. Most MITM attacks require that the attacker be in the middle, that is, talking to both ends of the connection and passing information. You can’t successfully do most TLS-based attacks otherwise. If you’re sophisticated enough to do most of those attacks, you’re already in a position to pass the message through, so why not do it?

There’s no reason not to just pass the message along, and that plus Occam’s Razor is why I think the mysteriously disappearing messages aren’t a symptom of spying at all. I think there’s a much more prosaic explanation. Which is not to say that their email isn’t being intercepted. It probably is. But I don’t think the missing messages are necessarily a smoking gun displaying a nation-state’s interest.

Another explanation

An alternative, if more boring, explanation for why some messages aren’t going through has to do with how Gmail handles outgoing email. Most non-Gmail mailhosts have entirely separate servers for incoming and outgoing mail. Outgoing mail goes through SMTP servers, while incoming mail is routed to IMAP (or sometimes POP) servers. The messages users see when looking at their mail client (MUA) are all stored on the incoming server. This includes, most critically, the content of the “Sent” folder.

In order to show you messages that you’ve sent, the default configuration of many MUAs, including Mutt and older versions of Apple Mail and Microsoft Outlook, is to save a copy of the outgoing message in the IMAP server’s “Sent” folder at the same time that it’s sent to the SMTP server for transmission to the recipient.
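
In rough Python terms, that traditional save-to-Sent behavior amounts to two independent steps, something like the sketch below; the server names, credentials, and “Sent” mailbox name are placeholders, and a real MUA obviously does a great deal more.

import imaplib
import smtplib
import time
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = 'me@example.org'
msg['To'] = 'you@example.com'
msg['Subject'] = 'Test'
msg.set_content('Hello from a hypothetical MUA.')

# Step 1: transmission, via the outgoing (SMTP) server.
with smtplib.SMTP_SSL('smtp.example.org') as smtp:
    smtp.login('me@example.org', 'password')
    smtp.send_message(msg)

# Step 2: record-keeping, via the incoming (IMAP) server. Gmail's SMTP
# servers do this part for you, which is why Gmail users typically
# configure their MUAs to skip it.
imap = imaplib.IMAP4_SSL('imap.example.org')
imap.login('me@example.org', 'password')
imap.append('Sent', '\\Seen', imaplib.Time2Internaldate(time.time()),
            msg.as_bytes())
imap.logout()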

This is a reasonable default for most ISPs, but not for Gmail. Google handles outgoing messages a bit differently, and their SMTP servers have more-than-average intelligence for an outgoing mail server. If you’re a Gmail user and you send your outgoing mail using a Gmail SMTP server, the SMTP server will automatically communicate with the IMAP server and put a copy of the outgoing message into your “Sent” folder. Pretty neat, actually. (A nice effect of this is that you get a lot more headers on your sent messages than you’d get by doing the save-to-IMAP route.)

So as a result of Gmail’s behavior, virtually all Gmail users have their MUAs configured not to save copies of outgoing messages via IMAP, and depend on the SMTP server to do it instead. This avoids duplicate messages ending up in the “Sent” folder, a common problem with older MUAs.

This is all fine, but it does have one odd effect: if your MUA is configured to use Gmail’s SMTP servers and then you suddenly use a different, non-Google SMTP server for some reason, you won’t get the sent messages in your “Sent” box anymore. All it takes is an intermittent connectivity problem to Google’s servers, causing the MUA to fail over to a different SMTP server (maybe an old ISP SMTP or some other configuration), and messages won’t show up anymore. And if the SMTP server it rolls over to isn’t correctly configured, messages might just get silently dropped.

I know this, because it’s happened to me: I have Gmail’s SMTP servers configured as primary, but also have my ISP’s SMTP server set up in my MUA, because I have to use it for some other email accounts that don’t come with a non-port-25 SMTP server (and my ISP helpfully blocks outgoing connections on port 25). It’s probably not an uncommon configuration at all.

Absent some other evidence that the missing messages are being caused by a particular attack (and it’d have to be a fairly blunt one, which makes me think of someone less competent than a nation-state actor), I think it’s easier to chalk the behavior up to misconfiguration than to enemy action.

Ultimately though, it doesn’t really matter, because everyone ought to be acting as though their messages are going to be intercepted as they go over the wire anyway. The Internet is a public network: by definition, there are no security guarantees in transit. If you want to prevent snooping, the only solution is end-to-end crypto combined with good endpoint hygiene.

Here’s wishing all the best to the VeraCrypt team as they work towards the code audit.

[1]: Those looking for more information on the TrueCrypt debacle can refer to this Register article or this MetaFilter discussion, both from mid-2014. This 2015 report may also be of interest. But as far as I know, the details of what happened to the developers to prompt the project’s digital self-immolation are still unknown and speculation abounds about the security of the original TrueCrypt.

[2]: “Dual use” in the sense that it is made available for use by anyone, and can therefore be used for both legitimate/legal and illegitimate/illegal purposes. I think it goes almost without saying that most people in the open-source development community accept the use of their software by bad actors as simply a cost of doing business and a reasonable trade-off for freedom, but this is clearly not an attitude that is universally shared by governments.


[/technology/software] permalink

Fri, 12 Aug 2016

The work I’ve been doing with Tvheadend to record and time-shift ATSC broadcast television got me thinking about my pile of old NTSC tuner cards, leftover from my MythTV system designed for recording analog cable TV. These NTSC cards aren’t worth much, now that both OTA broadcast and most cable systems have shifted completely over to ATSC and QAM digital modulation schemes, except in one regard: they ought to be able to still receive FM broadcasts.

Since the audio component of NTSC TV transmissions is basically just FM, and the NTSC TV bands completely surround the FM broadcast band on both sides, any analog TV receiver should have the ability to receive FM audio as well — at least in mono (FM stereo and NTSC stereo were implemented differently, the latter with a system called MTS). But of course whether this is actually possible depends on the tuner card’s implementation.

I haven’t plugged in one of my old Hauppauge PCI tuner cards yet, although they may not work because they contain an onboard MPEG-2 hardware encoder — a feature I paid dearly for, a decade ago, because it reduces the demand on the host system’s processor for video encoding significantly — and it wouldn’t surprise me if the encoder failed to work on an audio-only signal. My guess is that the newer cards which basically just grab a chunk of spectrum and digitize it, leaving all (or most) of the demodulation to the host computer, will be a lot more useful.

I’m not the first person to think that having a ‘TiVo for radio’ would be a neat idea, although Googling for anything in that vein gets you a lot of resources devoted to recording Internet “radio” streams (which I hate referring to as “radio” at all). There have even been dedicated hardware gadgets sold from time to time, designed to allow FM radio timeshifting and archiving.

  • Linux based Radio Timeshifting is a very nice article, written back in 2003, by Yan-Fa Li. Some of the information in it is dated now, and of course modern hardware doesn’t even break a sweat doing MP3 encoding in real time. But it’s still a decent overview of the problem.
  • This Slashdot article on radio timeshifting, also from 2003 (why was 2003 such a high-water-mark for interest in radio recording?), still has some useful information in it as well.
  • The /drivers/media/radio tree in the Linux kernel contains drivers for various varieties of FM tuners. Some of the supported devices are quite old (hello, ISA bus!) while some of them are reasonably new and not hard to find on eBay.

Since I have both a bunch of old WinTV PCI cards and a newer RTL2832U SDR dongle, I’m going to try to investigate both approaches: seeing if I can use the NTSC tuner as an over-engineered FM receiver, and if that fails maybe I’ll play around with RTL-SDR and see if I can get that to receive FM broadcast at reasonable quality.
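
If the RTL-SDR route pans out, the core of a software FM receiver is surprisingly small. Here’s a rough sketch of the idea, assuming the third-party pyrtlsdr, NumPy, and SciPy packages; the station frequency and rates are just example values, and a real ‘radio TiVo’ would wrap something like this in a scheduler and add proper de-emphasis and filtering.

import numpy as np
from rtlsdr import RtlSdr
from scipy.io import wavfile
from scipy.signal import decimate

STATION = 99.5e6   # example station frequency, Hz
FS = 1.2e6         # SDR sample rate, Hz
SECONDS = 10       # length of capture

# Capture complex baseband samples centered on the station.
sdr = RtlSdr()
sdr.sample_rate = FS
sdr.center_freq = STATION
sdr.gain = 'auto'
samples = sdr.read_samples(int(FS * SECONDS))
sdr.close()

# FM demodulation: the instantaneous frequency is the phase difference
# between successive complex samples.
demod = np.angle(samples[1:] * np.conj(samples[:-1]))

# Decimate 1.2 MHz down to 48 kHz (two stages of 5) and write mono audio.
audio = decimate(decimate(demod, 5), 5)
audio = (audio / np.max(np.abs(audio)) * 32767).astype(np.int16)
wavfile.write('fm_capture.wav', int(FS / 25), audio)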


[/technology/software] permalink

Thu, 11 Aug 2016

Echoing the theme of an article I read yesterday, about the FCC’s intentional — or at best negligent — duopoly in wired broadband, is this article about the current “5G” hype, and how it seems to be assisting the big telcos in disguising their under-investment in FTTH / FTTP in favor of more-profitable wireless services:

The Next Generation of Wireless — “5G” — Is All Hype

The author writes:

Cynics might point out that by waving their hands around about the coming miracle of 5G — even though its arrival is really a long way off — carriers are directing attention away from the terrible state of fiber last-mile infrastructure in the US. Call me one of those cynics. This kind of misleading tactic isn’t difficult to pull off in the U.S. […] A leading tech VC in New York, someone who is viewed as a thought leader, said to me not long ago, “Why do you keep talking about fiber? Everything’s going wireless.”

This is eerily similar to claims used by the telcos and cablecos to justify diminished regulation, by pointing to BPL (broadband over power lines). The major justification for eliminating ‘unbundling’ regulation, and for not applying it to cable lines at all, was that consumers were going to be able to obtain Internet service over a variety of last-mile circuits, including cable lines, telephone lines, fiber, and power wiring. This, of course, was horseshit — BPL was always a terrible idea — but it was just plausible enough to keep the regulators at bay while the market condensed into a duopoly.

Given that the telecommunications companies want nothing other than to extract maximum economic rents from consumers for as long as they can, while investing as little as they possibly can for the privilege — this is how corporations work, of course, so we shouldn’t be especially surprised — we should treat the 5G hype with suspicion.

No currently-foreseeable wireless technology is going to reduce the need for high-bandwidth (read: fiber-optic) backhaul; 5G as envisioned by most rational people would, in fact, vastly increase the demand for backhaul and the need for FTTH/FTTP. Be on guard for anyone who suggests that 5G will make investments in fiber projects — especially muni fiber — unnecessary, as they are almost certainly trying to sell you something, and probably nothing you want to buy.


[/technology/mobile] permalink