Kadin2048's Weblog

Fri, 26 Aug 2016

Bruce Schneier has a new article on his blog about the NSA’s all-but-confirmed stash of ‘zero day’ vulnerabilities, and it’s very solid, in typical Bruce Schneier fashion.

The NSA Is Hoarding Vulnerabilities

I won’t really try to recap it here, because it’s already about as concise as one can be about the issue. However, there is one thing in his article that I find myself mulling over, which is his suggestion that we should break up the NSA:

And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

Far be it from me to second-guess Schneier on most topics, but that just doesn’t seem to make a whole lot of sense. If the key problem is that vulnerabilities are being hoarded for offensive use rather than being shared with manufacturers (defensive use), it doesn’t seem like splitting those two missions into separate agencies is going to improve things. And the predictable result is that we’re then going to have two separate agencies working against one another, doing essentially the same research, looking for the same underlying vulnerabilities, for different aims. That seems… somewhat inefficient.

And if history is any guide, the U.S. will probably spend more on offensive armaments than on defense. Contrary to the Department of Defense’s name, since the end of WWII we have based our national-defense posture largely on a policy of force projection and deterrence-through-force, and I am highly skeptical that, as a nation, we’re going to suddenly take a different tack when it comes to “cyberwarfare” / IT security. The tension between offense and defense isn’t unique to IT: it exists in lots of other places, from ICBMs to vehicle armor, and in most cases U.S. doctrine emphasizes the offensive, force-projective capability. This is practically a defining element of U.S. strategic doctrine over the past 60 years.

So the net result of Schneier’s proposal would probably be to take the gloves off the NSA: relieve it of the defensive mission completely, giving it to DHS — which hardly seems capable of taking on a robust cyberdefense role, but let’s ignore that for the sake of polite discussion — but almost certainly emerge with its funding and offensive role intact. (Or even if there was a temporary shift in funding, since our national adversaries have, and apparently make use of, offensive cyberwarfare capabilities, it would only be a matter of time until we felt a ‘cyber gap’ and turned on the funding tap again.) This doesn’t seem like a net win from a defense standpoint.

I’ll go further, though this is admittedly speculation: I suspect that the package of vulnerabilities (dating from 2013) currently being “auctioned” by the group calling itself the Shadow Brokers probably owes its nondisclosure to some form of internal firewalling within the NSA as an organization. That is to say, the sort of offensive/defensive separation that Schneier seems to be proposing at a national level probably already exists within the NSA, and is part of why the zero-day vulnerabilities weren’t disclosed. We’ll probably never know for sure, but it wouldn’t surprise me if someone was hoarding the vulnerabilities within or for a particular team or group, perhaps to keep them from being subject to an “equities review” process that might have decided they were better off disclosed.

What we need is more communication, not less, and we need to make the communication flow in a direction that leads to public disclosure and vulnerability remediation in a timely fashion, while also realistically acknowledging the demand for offensive capacity. Splitting up the NSA wouldn’t help that.

However, in the spirit of “modest proposals”, a change in leadership structure might: currently, the Director of the NSA is also the Commander of the U.S. Cyber Command and Chief of the Central Security Service. It’s not necessarily clear to me that having all those roles, two-thirds of which are military and thus tend to lean ‘offensive’ rather than ‘defensive’, reside in the same person is ideal, and perhaps some thought should be given to having the NSA Director come from outside the military, if the goal is to push the offensive/defensive pendulum back in the opposite direction.

[/politics] permalink

The other day I discovered an interesting Python behavior that I somehow had managed not to hit before — in fairness, I use Python mostly for scripting and automation, not ‘real’ software development, but I still thought I understood the basics reasonably well.

Can you spot the problem? The following is designed to remove words from a list if they are below a certain number of characters, specified by args.minlength:

for w in words:
    if len(w) < int(args.minlength):
        words.remove(w)

The impending misbehavior, if you didn’t catch it by this point, is not necessarily obvious. It won’t barf an error at you, and you can actually get it to pass a trivial test, depending on how the test data is configured. But on a real dataset, you’ll end up with lots of words shorter than args.minlength left in words after you thought you had iterated through and cleaned them out!

(If you want to play with this on your own, try running the above loop against the contents of your personal iSpell dictionary — typically ~/.ispell_english on Unix/Linux — or some other word list. The defect will quickly become apparent.)
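To make the mechanics concrete, here’s a minimal, self-contained sketch (made-up word list and a hard-coded minimum length standing in for args.minlength, not real dictionary data). Removing an element shifts its successor into the slot the iterator has already passed, so consecutive short words get skipped:

# Buggy: modifying the list while iterating over it.
words = ["a", "an", "ox", "cat", "to", "of", "zebra"]

for w in words:
    if len(w) < 3:
        words.remove(w)

print(words)  # ['an', 'cat', 'of', 'zebra'] -- 'an' and 'of' sneak through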

A good description of the problem, along with several solutions, can of course be found on Stack Overflow. But to save you the click: the problem is iterating over a mutable object, such as a list, while modifying it (e.g. by removing items) inside the loop. Per the Python docs, you shouldn’t do that:

If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy.

The solution is easy:

for w in words[:]:
    if len(w) < int(args.minlength):
        words.remove(w)

Adding the slice notation causes Python to iterate over a copy of the list (pre-modification), which is what you actually want most of the time, and then you’re free to modify the actual list all you want from inside the loop. There are lots of other possible solutions if you don’t like the slice notation, but that one seems pretty elegant (and it’s what’s recommended in the Python docs so it’s presumably what someone else reading your code ought to expect).
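For completeness, here are a couple of the other common approaches, sketched against the same hypothetical words list and args.minlength:

# Build a new list with a comprehension (simplest when nothing else
# holds a reference to the original list object)...
words = [w for w in words if len(w) >= int(args.minlength)]

# ...or mutate the existing list object in place via slice assignment,
# which keeps any other references to it valid.
words[:] = [w for w in words if len(w) >= int(args.minlength)]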

I’d seen the for item in list[:]: construct in sample code before, but the exact nature of the bug it prevents had never really been clear to me. Perhaps this will be enlightening to someone else as well.

[/technology/software] permalink

Tue, 23 Aug 2016

About fifty pages into John Bruce Medaris’s 1960 autobiography Countdown for Decision, there is an unsourced quote attributed to Col. C.G. Patterson, who in 1944 was in charge of Anti-Aircraft Artillery for the U.S. First Army, outlining the concept of a “technological casualty”:

“If a weapon costs more to build, in money, materials, and manpower, than it costs the enemy to repair the damage the weapon causes, the user has suffered a technological casualty. In any long-drawn-out struggle this might be the margin between victory and defeat.” 1

As far as I can tell, the term “technological casualty” never passed into general usage with that meaning, which is unfortunate. And although sources do confirm that Col. Patterson existed and by all accounts served admirably as the commander of air defense artillery for First Army in 1944, there doesn’t appear to be much record outside of Medaris’ book of the quote. Still, credit where it is most likely due; if ever a shorthand name for this idea is required, I might humbly suggest “Patterson’s Dictum”. (It also sounds good.)

I suspect, given Patterson’s role at the time, that the original context of the quote had to do with offensive or defensive air capability. Perhaps it referred to the attrition of German capability that was at that point ongoing. In Countdown, Medaris discusses it in the context of the V-2, which probably consumed more German war resources to create than they destroyed of Allied ones. But it is certainly applicable more broadly.

On its face, Patterson’s statement assumes a sort of attritional, clash-of-civilizations, total-commitment warfare, where all available resources of one side are stacked against all available resources of the other. One might contend that it doesn’t have much applicability in the age of asymmetric warfare, now that we have a variety of examples of conflicts ending in victory — in the classic Clausewitzian political sense — for parties who never possessed any sort of absolute advantage in money, materials, or manpower.

But I would counter that even in the case of a modern asymmetric war, or a realpolitik-fueled ‘brushfire’ conflict with limited aims, the fundamental calculus of war still exists; it just isn’t as straightforward. Beneath all the additional terms that get added to the equation is the essential fact that defeat is always possible if victory proves too expensive. Limited war doesn’t require you to outspend your adversary’s entire society, only their ‘conflict budget’: their willingness to expend resources on that particular conflict.

Which makes Patterson’s point quite significant: if a modern weapons system can’t subtract as much from an adversary’s ‘conflict budget’ — either through actual destructive power, deterrence, or some other effect — as it subtracts from ours in order to field it (including the risk of loss), then it is essentially a casualty before it ever arrives.

1: Countdown for Decision (1960 ed.), page 51.

[/politics] permalink

Mon, 22 Aug 2016

Ars Technica has a nice article, published earlier this month, on the short life of the Digital Compact Cassette format, one of several attempts to replace the venerable analog cassette tape with a digital version, prior to its eventual demise in the download era.

At the risk of dating myself, I remember the (very brief) rise and (anticlimactic) fall of the Digital Compact Cassette, although I was a bit too poor to be in the target market of early adopters and hi-fi-philes that the first decks were aimed at. And while the Ars article is decent, it ignores the elephant in the room that contributed mightily to DCC’s demise: DRM.

DCC was burdened by a DRM system called SCMS, also present in the consumer version of DAT. This inclusion was not the fault of Philips or Matsushita (later Panasonic), who designed DCC, but a result of an odious RIAA-backed law passed in 1992, the Audio Home Recording Act, which mandated it in all “digital audio recording device[s]”.

It is telling that of the various formats encumbered by SCMS, exactly zero ever succeeded in the marketplace in a way that threatened the dominant formats. The AHRA was the RIAA’s most potent and successful weapon for suppressing technological advancement and maintaining the status quo throughout the 1990s, and it remains on the books today, de jure: a piece of legal “unexploded ordnance” waiting for someone to step on it.

Had it not been for the AHRA and SCMS, I think it’s likely that US consumers might have had not one but two alternative formats for digital music besides the CD, and perhaps three: consumer DAT, DCC, and MiniDisc. Of these, DAT is probably the best format from a pure-technology perspective — it squeezes more data into a smaller physical space than the other two, eliminating the need for lossy audio compression — but DAT decks are mechanically complex, owing to their helical scan system, and the smallest portable DATs never got down to Walkman size. DCC, on the other hand, used a more robust linear tape system, and perhaps most importantly it was compatible with analog cassette tapes. I think there is a very good chance that it could have won the battle, if the combatants had been given a chance to take the field.

But the AHRA and SCMS scheme conspired to make both consumer-grade DAT and DCC unappealing. Unlike today, where users have been slowly conditioned to accept that their devices will oppose them at every opportunity in the service of corporations and their revenue streams, audio enthusiasts from the analog era were understandably hostile to the idea that their gear might stop them from doing something it was otherwise quite physically capable of doing, like dubbing from one tape to another, or from a CD to a tape, in the digital domain. And a tax on blank media just made the price premium for digital, as opposed to analog, that much higher. If you are only allowed to make a single generation of copies due to SCMS, and if you’re going to pay extra for the digital media due to the AHRA, why not just get a nice analog deck with Dolby C or DBX Type 2 noise reduction, and spend the savings on a boatload of high-quality Type IV metal cassettes?

That, anyway, was the question I remember asking myself at the time. I never ended up buying a DCC deck, and like most of the world I continued listening to LPs, CDs, and analog cassettes right up until cheap computer-based CD-Rs, and then MP3 files, dragged the world of recorded music fully into the digital age, and out of the shadow of the AHRA.

[/technology] permalink

Tue, 16 Aug 2016

Bloomberg’s Matt Levine has a great article, published today, which begins with a discussion of the apparently-hollow shell company “Neuromama” (OTC: NERO), which — cue shocked face — is probably not in reality a $35 billion USD company, but quickly moves into a delightful discussion of insider trading, money market rates, an “underpants gnomes”-worthy business plan, and the dysfunction of the Commodity Futures Trading Commission. There’s even a bonus mention of Uber shares trading on the secondary market, which is something I’ve written about before. Definitely worth a read:

Heavy Ion Fusion and Insider Trading

If you only read one section of it, the part on “When is insider trading a crime?” is, in my humble opinion, probably the best. (Memo to self: next time there’s a big insider-trading scandal, be sure to come back to this.) But really, it’s a good article. Okay, there’s a bit too much gloating about those stupid regulators and their stupid regulations for someone who isn’t a hedge fund manager to get excited about, but it’s fucking Bloomberg, that’s probably a contractual obligation to get printed there. Also it’s Congress’ fault anyway, as usual.

[/finance] permalink

Mon, 15 Aug 2016

Very cool open-source project VeraCrypt is all over the news this week, it seems: first when the developers announced that they were going to commission a formal third-party code audit, and had come up with the funds to pay for it; and then today, when they claimed their emails were being intercepted by a “nation-state”-level actor.

The audit is great news, and once it’s complete I think we’ll have even more confidence in VeraCrypt as a successor to TrueCrypt (which suffered from a bizarre developer meltdown1 back in 2014).

The case of the missing messages

However, I’m a bit skeptical about the email-interception claim, at least based on the evidence put forward so far. It may be the case — and, let’s face it, should be assumed — that their email really is being intercepted by someone, probably multiple someones. Frankly, if you’re doing security research on a “dual use” tool2 like TrueCrypt and don’t think that your email is being intercepted and analyzed, you’re not participating in the same consensus reality as the rest of us. So, not totally surprising on the whole. Entirely believable.

What is weird, though, is that the evidence for the interception is that some messages have mysteriously disappeared in transit.

That doesn’t really make sense. From the standpoint of the mysterious nation-state-level interceptor, making the messages disappear tips your hand, and it also isn’t really consistent with how most modern man-in-the-middle style attacks work. Most MITM attacks require that the attacker actually be in the middle, that is, talking to both ends of the connection and passing information along; you can’t successfully pull off most TLS-based attacks otherwise. And if you’re sophisticated enough to do those attacks, you’re already in a position to pass the message through, so why not do it?

There’s no reason not to just pass the message along, and that plus Occam’s Razor is why I think the mysteriously disappearing messages aren’t a symptom of spying at all. I think there’s a much more prosaic explanation. Which is not to say that their email isn’t being intercepted. It probably is. But I don’t think the missing messages are necessarily a smoking gun displaying a nation-state’s interest.

Another explanation

An alternative, if more boring, explanation to why some messages aren’t going through has to do with how Gmail handles outgoing email. Most non-Gmail mailhosts have entirely separate servers for incoming and outgoing mail. Outgoing mail goes through SMTP servers, while incoming mail is routed to IMAP (or sometimes POP) servers. The messages users see when looking at their mail client (MUA) are all stored on the incoming server. This includes, most critically, the content of the “Sent” folder.

In order to show you messages that you’ve sent, the default configuration of many MUAs, including Mutt and older versions of Apple Mail and Microsoft Outlook, is to save a copy of the outgoing message in the IMAP server’s “Sent” folder at the same time that it’s sent to the SMTP server for transmission to the recipient.

This is a reasonable default for most ISPs, but not for Gmail. Google handles outgoing messages a bit differently, and their SMTP servers have more-than-average intelligence for an outgoing mail server. If you’re a Gmail user and you send your outgoing mail using a Gmail SMTP server, the SMTP server will automatically communicate with the IMAP server and put a copy of the outgoing message into your “Sent” folder. Pretty neat, actually. (A nice effect of this is that you get a lot more headers on your sent messages than you’d get by doing the save-to-IMAP route.)
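For illustration, here’s a rough sketch in Python of what a classic, non-Gmail-aware MUA effectively does on send: one copy goes out via SMTP, and a second copy is explicitly appended to the IMAP “Sent” folder. The hostnames, credentials, and folder name are placeholders, not anyone’s real configuration:

import imaplib
import smtplib
import time
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "test"
msg.set_content("Hello.")

# Step 1: hand the message to the outgoing (SMTP) server for delivery.
with smtplib.SMTP_SSL("smtp.example.com") as smtp:
    smtp.login("me@example.com", "app-password")
    smtp.send_message(msg)

# Step 2: separately save a copy to the incoming (IMAP) server's Sent
# folder. Gmail's SMTP servers do this step for you, which is why Gmail
# users normally leave it turned off in their MUA.
with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("me@example.com", "app-password")
    imap.append("Sent", None, imaplib.Time2Internaldate(time.time()),
                msg.as_bytes())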

So as a result of Gmail’s behavior, virtually all Gmail users have their MUAs configured not to save copies of outgoing messages via IMAP, and depend on the SMTP server to do it instead. This avoids duplicate messages ending up in the “Sent” folder, a common problem with older MUAs.

This is all fine, but it does have one odd effect: if your MUA is configured to use Gmail’s SMTP servers and then you suddenly use a different, non-Google SMTP server for some reason, you won’t get the sent messages in your “Sent” box anymore. All it takes is an intermittent connectivity problem to Google’s servers, causing the MUA to fail over to a different SMTP server (maybe an old ISP SMTP or some other configuration), and messages won’t show up anymore. And if the SMTP server it rolls over to isn’t correctly configured, messages might just get silently dropped.

I know this because it’s happened to me: I have Gmail’s SMTP servers configured as primary, but I also have my ISP’s SMTP server set up in my MUA, because I have to use it for some other email accounts that don’t come with a non-port-25 SMTP server (and my ISP helpfully blocks outgoing connections on port 25). It’s probably not an uncommon configuration at all.

Absent some other evidence that the missing messages are being caused by a particular attack (and it’d have to be a fairly blunt one, which makes me think someone less competent than nation-state actors), I think it’s easier to chalk the behavior up to misconfiguration than to enemy action.

Ultimately, though, it doesn’t really matter, because everyone ought to be acting as though their messages are going to be intercepted as they go over the wire anyway. The Internet is a public network: by definition, there are no security guarantees in transit. If you want to prevent snooping, the only solution is end-to-end crypto combined with good endpoint hygiene.

Here’s wishing all the best to the VeraCrypt team as they work towards the code audit.

1: Those looking for more information on the TrueCrypt debacle can refer to this Register article or this MetaFilter discussion, both from mid-2014. This 2015 report may also be of interest. But as far as I know, the details of what happened to the developers to prompt the project’s digital self-immolation are still unknown and speculation abounds about the security of the original TrueCrypt.

2: “Dual use” in the sense that it is made available for use by anyone, and can be therefore used for both legitimate/legal and illegitimate/illegal purposes. I think it goes almost without saying that most people in the open-source development community accept the use of their software by bad actors as simply a cost of doing business and a reasonable trade-off for freedom, but this is clearly not an attitude that is universally shared by governments.

[/technology/software] permalink

Fri, 12 Aug 2016

The work I’ve been doing with Tvheadend to record and time-shift ATSC broadcast television got me thinking about my pile of old NTSC tuner cards, left over from the MythTV system I built for recording analog cable TV. Now that both OTA broadcast and most cable systems have shifted completely over to ATSC and QAM digital modulation, these NTSC cards aren’t worth much, except in one regard: they ought to still be able to receive FM broadcasts.

Since the audio component of an NTSC TV transmission is basically just FM, and the NTSC TV bands completely surround the FM broadcast band on both sides, any analog TV receiver should be able to receive FM audio as well — at least in mono (FM stereo and NTSC stereo were implemented differently, the latter with a system called MTS). But of course whether this is actually possible depends on the tuner card’s implementation.

I haven’t plugged in one of my old Hauppauge PCI tuner cards yet, though they may not work, because they contain an onboard MPEG-2 hardware encoder — a feature I paid dearly for a decade ago, because it significantly reduces the demand on the host system’s processor for video encoding — and it wouldn’t surprise me if the encoder failed to work on an audio-only signal. My guess is that the newer cards, which basically just grab a chunk of spectrum and digitize it, leaving all (or most) of the demodulation to the host computer, will be a lot more useful.

I’m not the first person to think that having a ‘TiVo for radio’ would be a neat idea, although Googling for anything in that vein gets you a lot of resources devoted to recording Internet “radio” streams (which I hate referring to as “radio” at all). There have even been dedicated hardware gadgets sold from time to time, designed to allow FM radio timeshifting and archiving.

  • Linux based Radio Timeshifting is a very nice article, written back in 2003, by Yan-Fa Li. Some of the information in it is dated now, and of course modern hardware doesn’t even break a sweat doing MP3 encoding in real time. But it’s still a decent overview of the problem.
  • This Slashdot article on radio timeshifting, also from 2003 (why was 2003 such a high-water-mark for interest in radio recording?), still has some useful information in it as well.
  • The /drivers/media/radio tree in the Linux kernel contains drivers for various varieties of FM tuners. Some of the supported devices are quite old (hello, ISA bus!) while some of them are reasonably new and not hard to find on eBay.

Since I have both a bunch of old WinTV PCI cards and a newer RTL2832U SDR dongle, I’m going to try to investigate both approaches: seeing if I can use the NTSC tuner as an over-engineered FM receiver, and, if that fails, playing around with RTL-SDR to see if I can get it to receive FM broadcasts at reasonable quality.
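As a starting point for the RTL-SDR route, here’s a rough sketch that shells out to the rtl_fm utility from the rtl-sdr package and dumps its demodulated output to disk; the frequency, sample rates, and filename are placeholders, and the exact settings may need adjusting for a particular dongle and station:

import subprocess

FREQ = "99.5M"           # station frequency (placeholder)
OUTFILE = "capture.raw"  # raw signed 16-bit little-endian mono PCM

# rtl_fm demodulates wideband FM and writes raw samples to stdout.
proc = subprocess.Popen(
    ["rtl_fm", "-f", FREQ, "-M", "wbfm", "-s", "200000", "-r", "48000"],
    stdout=subprocess.PIPE,
)

with open(OUTFILE, "wb") as out:
    try:
        # Copy the demodulated audio to disk until interrupted (Ctrl-C).
        while True:
            chunk = proc.stdout.read(65536)
            if not chunk:
                break
            out.write(chunk)
    except KeyboardInterrupt:
        proc.terminate()

From there, wrapping or transcoding the raw PCM with something like sox or ffmpeg, and kicking the script off from cron, gets you most of the way to a crude radio-TiVo.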

[/technology/software] permalink

Thu, 11 Aug 2016

Echoing the theme of an article I read yesterday, about the FCC’s intentional — or at best negligent — duopoly in wired broadband, is this article about the current “5G” hype, and how it seems to be assisting the big telcos in disguising their under-investment in FTTH / FTTP in favor of more-profitable wireless services:

The Next Generation of Wireless — “5G” — Is All Hype

The author writes:

Cynics might point out that by waving their hands around about the coming miracle of 5G — even though its arrival is really a long way off — carriers are directing attention away from the terrible state of fiber last-mile infrastructure in the US. Call me one of those cynics. This kind of misleading tactic isn’t difficult to pull off in the U.S. […] A leading tech VC in New York, someone who is viewed as a thought leader, said to me not long ago, “Why do you keep talking about fiber? Everything’s going wireless.”

This is eerily similar to the claims the telcos and cablecos used to justify diminished regulation by pointing to BPL (broadband over power lines). The major justification for eliminating ‘unbundling’ regulation, and for not applying it to cable lines at all, was that consumers were going to be able to obtain Internet service over a variety of last-mile circuits, including cable lines, telephone lines, fiber, and power wiring. This, of course, was horseshit — BPL was always a terrible idea — but it was just plausible enough to keep the regulators at bay while the market condensed into a duopoly.

Given that the telecommunications companies want nothing other than to extract maximum economic rents from consumers for as long as they can, while investing as little as they possibly can for the privilege — this is how corporations work, of course, so we shouldn’t be especially surprised — we should treat the 5G hype with suspicion.

No currently-foreseeable wireless technology is going to reduce the need for high-bandwidth (read: fiber-optic) backhaul; 5G as envisioned by most rational people would, in fact, vastly increase the demand for backhaul and the need for FTTH/FTTP. Be on guard for anyone who suggests that 5G will make investments in fiber projects — especially muni fiber — unnecessary, as they are almost certainly trying to sell you something, and probably nothing you want to buy.

[/technology/mobile] permalink

About a decade ago, I spent a slightly-absurd amount of time building a MythTV system for my house. It was pretty awesome, for the time: basically a multi-player distributed DVR. It could record 3 programs off of cable TV simultaneously, while also letting up to 3 people play back recordings on different TVs in the house.

It lasted up until we moved, and I didn’t have the time to get everything hooked up and working again. By that point commercial streaming services had started to take off, digital cable had reduced the amount of programming that you could easily record with an inexpensive NTSC tuner card, and cable TV prices had crept up to the point where I was looking for a way to watch less cable TV rather than more. We made the switch to an all-IP TV system (Netflix, Hulu, Amazon Prime) a while later, and never really looked back.

But recently, spurred mostly by a desire to watch the Olympics live — a desire left largely unfulfilled by NBC; thanks, guys — we got an antenna and hit the ‘auto scan’ button. The number of free over-the-air (OTA) ATSC TV stations we could receive was a pleasant surprise, especially to someone who grew up in a rural area and still thought of ‘broadcast TV’ as a haze of static on a good day.

Knowing that there were 30+ free OTA channels (when you count digital subchannels) available in my house, for the cost of only an antenna, got me looking back at the state of Linux-based PVR and timeshifting software.

MythTV, of course, is still around. But if my past experience is any guide, it’s not something you just casually set up. Also, it still seems to be designed with the idea of a dedicated HTPC in mind: basically, to use it, you’ll want a Linux PC running MythTV connected directly to your TV. The MythTV clients for STBs like the Roku or Amazon Prime stick seem pretty immature, as does the Plex plugin. Although I may end up coming back to it, I really didn’t feel like going back down the MythTV road if there were lighter-weight options for recording OTA TV and serving it up.

Enter Tvheadend, which seems to be a more streamlined approach. Rather than offering an entire client/server solution for DVRing, content management, and HTPC viewing, it’s just the DVR and, to a lesser extent, content management. The idea is that you set up and schedule recordings via a web interface to your server, and then the server makes those recordings available via DLNA to streaming devices on the network.

It’s in no way a complete replacement for MythTV, but it seemed to talk to my HDHomeRun (the original two-tuner ATSC/QAM model, not one of the newer ones with built-in DLNA) with very little configuration at all. So far, I’ve got it working to the point where I can watch live TV using VLC on a client machine in the house, and I’m just now starting to work on getting an Electronic Program Guide (EPG) set up, a bit tricky in the US due to the different format used for OTA metadata vs. in Europe, where most of the project’s developers seem to be located.

Anyway, I’m glad to see that MythTV is still apparently going strong, and if I had more time I’d definitely love to cobble together a DIY all-IP home video distribution and centralized PVR system again. But given limited time for side projects these days, Tvheadend seems to fit the bill for a lighter-weight OTA network recorder.

[/technology] permalink

Wed, 10 Aug 2016

“America’s Intentional Broadband Duopoly” by Dane Jasper, writing on the blog of Sonic.net Inc., an ambitious Gigabit ISP, is one of the best summaries of why US broadband is the way it is that I’ve read. If you live in the US and use the Internet, it’s worth reading, just to understand why your Internet access options suck so damn badly compared to the rest of the civilized world.

Spoiler alert: it is not, as telco/cableco apologists sometimes claim, a function of geography or population density — there are ample examples of countries with more challenging geography or less-dense populations that have far better, and cheaper, Internet service. (And population density is really a red herring anyway, once you realize that most of the US population lives in areas that are pretty dense, like the Eastern Seaboard, which is comparable to Europe in that respect.) The answer is a sad combination of political lobbying, regulatory capture, and technological false promises.

In case their site goes down at some point in the future, here’s a link to the Internet Archive’s cached version.

Via MetaFilter.

[/politics] permalink