Kadin2048's Weblog

Thu, 20 Oct 2016

For all the stupidity of the current Presidential election, one interesting discussion that it has prompted is a resurrection of the old debate over nuclear strategy, and particularly the strategy of “launch under attack” (better known as “Launch On Warning”). Jeffrey Lewis has an article in Foreign Policy, “Our Nuclear Procedures Are Crazier Than Trump”, which ties this into current events, prompted by recent statements by both candidates.

Much of the discussion in the last 24 hours has centered on whether Hillary Clinton inadvertently disclosed classified information when she mentioned, during the third debate, that the President would have only “four minutes” to decide on whether to respond in the event of a large-scale attack on the continental U.S. by an adversary. This is not, at least to me, a particularly interesting discussion; nothing Clinton said goes beyond what is in the open literature on the topic and has been available for decades.

What is interesting is that, in 2016, we’re talking about Launch On Warning at all. Clinton’s “four minutes” should be a thing of the past.

I mean: the other President Clinton supposedly moved the U.S. away from LOW in a 1997 Presidential Directive, instead putting U.S. forces on a stance of second-strike retaliation only after actually being on the receiving end of a successful attack. This is a reasonable posture, given that the U.S. SSBN force alone has enough destructive power to serve, independently of the rest of the ‘nuclear triad’, as a reasonable deterrent against a first strike by another global power.

What’s interesting is that, at the time, the Clinton administration downplayed the move and said that it was merely a continuation of existing policy dating from the Reagan years and expressed in previous PDDs. A Clinton spokesperson reportedly said at the time: “in this PDD we direct our military forces to continue to posture themselves in such a way as to not rely on launch on warning—to be able to absorb a nuclear strike and still have enough force surviving to constitute credible deterrence.” (Emphasis mine.)

The actual Presidential Directives are, as one might expect, still classified, so we don’t have a lot other than hearsay and the statements of various spokespeople to go off of. But it would appear safe to say that the U.S. has not depended on LOW since at least 1997, and probably since some point in the 80s. I think it’s likely that the original change was prompted by a combination of near-miss events in the 1970s (e.g. Zbigniew Brzezinski’s infamous 3 A.M. wakeup call on November 9, 1979), plus the maturation of the modern SSBN force into a viable second-strike weapon, which together caused U.S. leaders to question the wisdom of keeping the nuclear deterrent on a hair trigger. As well they probably should have, given the risks.

In fact, being able to lower the proverbial hammer and relax the national trigger finger somewhat is probably the biggest benefit of having an SSBN force. It’s why other nuclear powers, notably the U.K., have basically abandoned ground-based nuclear launch systems in favor of relying exclusively on submarines for deterrence. The U.K., famously, issues “Letters of Last Resort” to their submarine captains, potentially giving them launch authority even in the absence of any external command and control structure — ensuring a retaliatory capability even in the event of complete annihilation of the U.K. itself. While this places a lot of responsibility on the shoulders of a handful of submarine captains, it also relieves the entire U.K. defense establishment from having to plan for and absorb a decapitation attack, and it certainly seems like a better overall plan than automated systems that might be designed to do the same thing.

In the U.S. we’ve never gone as far as the U.K. in terms of delegation of nuclear-launch authority (perhaps because the size of the U.S. nuclear deterrent would mean an unacceptable number of trusted individuals would be required), but it’s been a while since any President has truly needed to decide, in a handful of minutes, whether to end the world or face unilateral annihilation. They would still potentially need to decide whether to authorize a U.S. ICBM launch in that very short window of time, but they wouldn’t lose all retaliatory capacity if they chose not to, and it is difficult to imagine — given the possibility and actual past experience of false alarms — that a sane president would authorize a launch before confirmation of an actual attack on U.S. soil.

So why did the “four minute” number resurface at all? That’s a bit of a mystery. It could have just been a debate gambit by Clinton, which is admittedly the simplest explanation, or perhaps the idea of Launch On Warning isn’t completely gone from U.S. strategic policy. This is not implausible, since we still maintain a land-based ICBM force, and the ICBMs are still subject to the first-strike advantage which produced Launch On Warning in the first place.

And rather than debating the debate, which will be a moot point in a very few weeks, the real question we ought to be asking is why we bother to maintain the land-based strategic nuclear ICBM force at all.

Here’s a modest proposal: retire the ICBM force’s strategic nuclear warheads, but retain the missile airframes and other launch infrastructure. Let other interested parties observe the nuclear decommissioning, if they want to, so that there’s no mistaking a future launch of those missiles as a nuclear one. And then use the missiles for non-nuclear Prompt Global Strike or a similar mission (e.g. non-nuclear FOBS, “rod from God” kinetic weapons, or whatever our hearts desire).

It ought to make everyone happy: it’s that many fewer fielded nuclear weapons in the world; it eliminates the most vulnerable part of the nuclear triad and moves us firmly away from LOW; it doesn’t take away any service branch’s sole nuclear capability (the Air Force would retain air-launched strategic capability, as a hedge against future developments making the SSBN force obsolete); and it would trade an expensive and not-especially-useful strategic capability for a much-more-useful tactical one, which in the long term could allow the U.S. to draw down overseas-deployed personnel and vulnerable carrier strike groups while retaining rapid global reach.

It makes too much sense to ever actually occur, of course, at least not during an election season.

0 Comments, 0 Trackbacks

[/politics] permalink

Thu, 13 Oct 2016

At some point, Yahoo started sticking a really annoying popup on basically every single Flickr page, if you aren’t logged in with a Yahoo ID. Blocking these popups is reasonably straightforward with uBlock or ABP, but it took me slightly longer than it should have to figure it out.

As usual, here’s the tl;dr version. Add this to your uBlock “My filters”:

! Block annoying Flickr login popups
www.flickr.com##.show.mini-footer.signup-footer

That’s it. Note that this doesn’t really “block” anything; it’s a CSS element-hiding rule. For it to work, you have to make sure ‘Cosmetic Filters’ is enabled in uBlock / uBlock Origin.

The slightly-longer story of why this took more than ten seconds of my time: the default uBlock rule that’s created when you right-click on one of the popups and select ‘Block Element’ doesn’t work well. That’s because Yahoo embeds a bunch of random characters in the CSS class names for each one, and they change on each page load. (It’s not clear to me whether this is expressly designed to defeat adblockers / popup blockers, but it certainly looks a bit like a blackhat tactic.)
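To see why the randomized class names defeat an exact rule, here’s a toy simulation (this is purely illustrative, not Yahoo’s actual markup; only the three stable class names come from the filter above):

```python
import random
import string

# The stable class names used by the working filter rule.
STABLE = {"show", "mini-footer", "signup-footer"}

def popup_classes():
    """Simulate one page load: the popup carries the stable classes
    plus a random token that changes every time the page is rendered."""
    token = "".join(random.choices(string.ascii_lowercase + string.digits, k=8))
    return STABLE | {token}

first_load = popup_classes()
second_load = popup_classes()

# A rule pinned to the full class list from one load breaks on the next,
# because the random token is different:
print(first_load.issubset(second_load))   # almost certainly False

# A genericized rule naming only the stable classes keeps matching:
print(STABLE.issubset(second_load))       # True
```

The genericized cosmetic filter works for the same reason the second check does: it matches on the subset of classes that stays constant across reloads, and ignores the rest.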

Using the uBlock Origin GUI, you have to Ctrl-click (Cmd-click on a Mac) on the top element hiding rule in order to get a ‘genericized’ version of it that removes the full CSS path, and works across page reloads. I’d never dug into any of the advanced features of uBlock Origin before — it’s always just basically worked out of the box, insofar as I needed it to — so this feature was a nice discovery.

Why, exactly, Yahoo is shoving this annoying popup in front of content on virtually every Flickr page, to every non-logged-in viewer, isn’t clear, although we can certainly speculate: Yahoo is probably desperate at this point to get users to log in. Part of their value as a company hinges on the number of active users they can claim. So each person they hard-sell into logging in adds that much more to the price they’ll probably get whenever somebody steps in and buys them.

As a longtime Flickr user, that end can’t come soon enough. It was always disappointing that Flickr sold out to Yahoo at all; somewhere out there, I believe there’s a slightly-less-shitty parallel universe where Google bought Flickr, and Yahoo bought YouTube, and Flickr’s bright and beautiful site culture was saved just as YouTube’s morass of vitriol and intolerance became Yahoo’s problem to moderate. Sadly, we do not live in that universe. (And, let’s be honest, Google would probably have killed off Flickr years ago, along with everything else in their Graveyard of Good Ideas. See also: Google Reader.)

Perhaps once Yahoo is finally sold and broken up for spare parts, someone will realize that Flickr still has some value and put some effort into it, aside from strip-mining it for logins as Yahoo appears to be doing. A man can dream, anyway.

[/technology/web] permalink

Wed, 14 Sep 2016

Everyone’s favorite security analyst Bruce Schneier seems to think that somebody is learning how to “take down the Internet” by repeatedly probing key pieces of “basic infrastructure” — exactly what’s being probed isn’t stated, but the smart money is on the DNS root servers. Naturally, who is doing this is left unsaid as well, although Schneier does at least hazard the obvious guess at China and Russia.

If this is true, it’s a seemingly sharp escalation towards something that might legitimately be called ‘cyberwarfare’, as opposed to simply spying-using-computers, which is most of what gets lumped in under that label today. Though, it’s not clear exactly why a state-level actor would want to crash DNS; it’s arguably not really “taking down the Internet”, although it would mess up a lot of stuff for a while. Even if you took down the root DNS servers, it wouldn’t stop IP packets from being routed around (the IP network itself is pretty resilient), and operators could pretty quickly unplug their caching DNS resolvers and let them run independently, restoring service to their users. You could create a mess for a while, but it wouldn’t be crippling in the long term.

Except perhaps as one component of a full-spectrum, physical-world attack, it doesn’t make a ton of sense to disrupt a country’s DNS resolvers for a few hours. And Russia and China don’t seem likely to actually attack the U.S. anytime soon; relations with both countries seem to be getting worse over time, but they’re not shooting-war bad yet. So why do it?

The only reason that comes to mind is that it’s less ‘preparation’ than ‘demonstration’. It’s muscle flexing on somebody’s part, and not particularly subtle flexing at that. The intended recipient of the message being sent may not even be the U.S., but some third party: “see what we can do to the U.S., and imagine what we can do to you”.

Or perhaps the eventual goal is to cover for a physical-world attack, but not against the U.S. (where it would probably result in the near-instant nuclear annihilation of everyone concerned). Perhaps the idea is to use a network attack on the U.S. as a distraction, while something else happens in the real world? Grabbing eastern Ukraine, or Taiwan, just as ideas.

Though an attack on the DNS root servers would be inconvenient in the short run, I am not sure that, in the long run, it would be the worst thing to happen to the network as an organism: DNS is a known weakness of the global Internet already, one that desperately needs a fix but where there’s not enough motivation to get everyone moving together. An attack would doubtless provide that motivation, and be a one-shot weapon in the process.

Update: This article from back in April, published by the ‘Internet Governance Project’, mentions a Chinese-backed effort to weaken US control over the root DNS, either by creating additional root servers or by potentially moving to a split root. So either the probing or a future actual disruption of DNS could be designed to further this agenda.

In 2014, [Paul] Vixie worked closely with the state-owned registry of China (CNNIC) to promote a new IETF standard that would allow the number of authoritative root servers to increase beyond the current limit of 13. As a matter of technical scalability, that may be a good idea. The problem is its linkage to a country that has long shown a more than passing interest in a sovereign Internet, and in modifying the DNS to help bring about sovereign control of the Internet. For many years, China has wanted its “own” root server. The proposal was not adopted by IETF, and its failure there seems to have prompted the formation and continued work of the YETI-DNS project.

The YETI-DNS project appears, at the moment, to be defunct. Still, China would seem to have the most to gain by making the current U.S.-based root DNS system seem fragile, given the stated goal of obtaining their own root servers.

[/technology] permalink

Sun, 11 Sep 2016

If you can only bear to read one 9/11 retrospective or tribute piece this year, I’d humbly suggest — if you are not already familiar — reading the story of Rick Rescorla, one of the many heroes of the WTC evacuation.

The Real Heroes Are Dead, written by James B. Stewart in The New Yorker, from February 2002, is worth the read.

[/other] permalink

Fri, 09 Sep 2016

This was originally posted to Hacker News as a comment in a discussion about “microhousing”. The question I was responding to was:

What is NIMBY for microhousing based on?

This is an ongoing argument in Northern Virginia (which is not quite as expensive as SF / Seattle / NYC, but probably only one cost tier below that) over micro-housing, typically in the form of backyard apartments and the subdivision of single-family homes into boarding houses, and the major arguments are basically the same issues that apply to all “just build more housing, stupid” proposals.

Basically, if you suddenly build a lot more housing, you’d start to strain the infrastructure of the community in other ways. That strain is really, really unpleasant to other people who share the infrastructure, and so current residents — who are often already feeling like things are strained and getting worse over time — would rather avoid making things worse. The easiest way to avoid making things worse is just to control the number of residents, and the easiest way to do that is to control the amount of housing: If you don’t live here, you’re probably not using the infrastructure. QED.

In many ways, building more housing is the easiest problem to solve when it comes to urban infrastructure. Providing a heated place out of the rain just isn’t that hard, compared to (say) transportation or schools or figuring out a sustainable economic balance.

Existing residents are probably (and reasonably) suspicious that once a bunch of tiny apartments are air-dropped in, and then a bunch of people move in to fill them up, that there won’t be any solution to any of the knock-on problems that will inevitably result — parking, traffic, school overcrowding, tax-base changes, stress to physical infrastructure like gas/water/sewer/electric systems — until those systems become untenably broken. I mean, I can’t speak to Seattle, but those things are already an increasingly-severe problem today, with the current number of residents, in my area, and people don’t have much faith in government’s ability to fix them; so the idea that the situation will be improved once everyone installs a couple of backyard apartments is ridiculous. (And then there are questions like: how are these backyard apartments going to be taxed? Are people who move in really going to pay more in taxes than they consume in services and infrastructure impact, or is this going to externalize costs via taxes on everyone else? There’s no clear answer to these questions, and people are reluctant to become the test case.)

If you want more housing, you need more infrastructure. If you want more infrastructure, either you need a different funding model or you need better government and more trust in that government. Our government is largely (perceived to be) broken, and public infrastructure is (perceived to be) broken or breaking, and so the unsurprising result is that nobody wants to build more housing and add more strain to a system that’s well beyond its design capacity anyway.

That’s why there’s so much opposition to new housing construction, particularly to ideas that look just at ways to provide more housing without doing anything else. You’re always going to get a lot of opposition to “just build housing” proposals unless they’re part of a compelling plan to actually build a community around that new housing.

[/politics] permalink

Fri, 26 Aug 2016

Bruce Schneier has a new article about the NSA’s basically-all-but-confirmed stash of ‘zero day’ vulnerabilities on his blog, and it’s very solid, in typical Bruce Schneier fashion.

The NSA Is Hoarding Vulnerabilities

I won’t really try to recap it here, because it’s already about as concise as one can be about the issue. However, there is one thing in his article that I find myself mulling over, which is his suggestion that we should break up the NSA:

And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

Far be it from me to second-guess Schneier on most topics, but that just doesn’t seem to make a whole lot of sense. If the key problem is that vulnerabilities are being hoarded for offensive use rather than being shared with manufacturers (defensive use), it doesn’t seem like splitting those two missions into separate agencies is going to improve things. And the predictable result is that we’re then going to have two separate agencies working against one another, doing essentially the same research, looking for the same underlying vulnerabilities, for different aims. That seems… somewhat inefficient.

And if history is any guide, the U.S. will probably spend more on offensive armaments than on defense. Contrary to the Department of Defense’s name, since the end of WWII we have based our national-defense posture largely on a policy of force projection and deterrence-through-force, and I am highly skeptical that, as a nation, we’re going to suddenly take a different tack when it comes to “cyberwarfare” / IT security. The tension between offense and defense isn’t unique to IT: it exists in lots of other places, from ICBMs to vehicle armor, and in most cases U.S. doctrine emphasizes the offensive, force-projective capability. This is practically a defining element of U.S. strategic doctrine over the past 60 years.

So the net result of Schneier’s proposal would probably be to take the gloves off the NSA: relieve it of the defensive mission completely, giving it to DHS — which hardly seems capable of taking on a robust cyberdefense role, but let’s ignore that for the sake of polite discussion — but almost certainly emerge with its funding and offensive role intact. (Or even if there was a temporary shift in funding, since our national adversaries have, and apparently make use of, offensive cyberwarfare capabilities, it would only be a matter of time until we felt a ‘cyber gap’ and turned on the funding tap again.) This doesn’t seem like a net win from a defense standpoint.

I’ll go further, admittedly speculation: I suspect that the package of vulnerabilities (dating from 2013) that are currently being “auctioned” by the group calling themselves the Shadow Brokers probably owe their nondisclosure to some form of internal firewalling within NSA as an organization. That is to say, the sort of offensive/defensive separation that Schneier is seemingly proposing at a national level probably exists within NSA already and is related to why the zero-day vulnerabilities weren’t disclosed. We’ll probably never know for sure, but it wouldn’t surprise me if someone was hoarding the vulnerabilities within or for a particular team or group, perhaps in order to prevent them from being subject to an “equities review” process that might decide they were better off being disclosed.

What we need is more communication, not less, and we need to make the communication flow in a direction that leads to public disclosure and vulnerability remediation in a timely fashion, while also realistically acknowledging the demand for offensive capacity. Splitting up the NSA wouldn’t help that.

However, in the spirit of “modest proposals”, a change in leadership structure might: currently, the Director of the NSA is also the Commander of the U.S. Cyber Command and Chief of the Central Security Service. It’s not necessarily clear to me that having all those roles, two-thirds of which are military and thus tend to lean ‘offensive’ rather than ‘defensive’, reside in the same person is ideal, and perhaps some thought should be given to having the NSA Director come from outside the military, if the goal is to push the offensive/defensive pendulum back in the opposite direction.

[/politics] permalink

The other day I discovered an interesting Python behavior that I somehow had managed not to hit before — in fairness, I use Python mostly for scripting and automation, not ‘real’ software development, but I still thought I understood the basics reasonably well.

Can you spot the problem? The following is designed to remove words from a list if they are below a certain number of characters, specified by args.minlength:

for w in words:
    if len(w) < int(args.minlength):
        words.remove(w)

The impending misbehavior, if you didn’t catch it by this point, is not necessarily obvious. It won’t barf an error at you, and you can actually get it to pass a trivial test, depending on how the test data is configured. But on a real dataset, you’ll end up with lots of words shorter than args.minlength left in words after you thought you had iterated through and cleaned them!

(If you want to play with this on your own, try running the above loop against the contents of your personal iSpell dictionary — typically ~/.ispell_english on Unix/Linux — or some other word list. The defect will quickly become apparent.)
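To make the misbehavior concrete, here’s the same loop run against a toy list, with a literal 3 standing in for int(args.minlength):

```python
words = ["a", "bb", "longenough", "cc", "dd"]

# The same buggy loop as above: mutating the list while iterating over it.
for w in words:
    if len(w) < 3:
        words.remove(w)

# Removing "a" shifts the remaining items left, so the iterator lands on
# "longenough" next and never examines "bb"; "dd" is missed the same way
# after "cc" is removed.
print(words)  # ['bb', 'longenough', 'dd']
```

Two of the four short words survive, with no error raised anywhere.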

A good description of the problem, along with several solutions, is of course found on Stack Overflow. But to save you the click: the problem is iterating over a mutable object, such as a list, and then modifying the list (e.g. by removing items) inside the loop. Per the Python docs, you shouldn’t do that:

If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy.

The solution is easy:

for w in words[:]:
    if len(w) < int(args.minlength):
        words.remove(w)

Adding the slice notation causes Python to iterate over a copy of the list (pre-modification), which is what you actually want most of the time, and then you’re free to modify the actual list all you want from inside the loop. There are lots of other possible solutions if you don’t like the slice notation, but that one seems pretty elegant (and it’s what’s recommended in the Python docs so it’s presumably what someone else reading your code ought to expect).
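Probably the most common of those alternatives is a list comprehension, which sidesteps in-place mutation entirely by building a new list of the items to keep (again with a literal stand-in for int(args.minlength)):

```python
minlength = 3  # stands in for int(args.minlength) in the original snippet
words = ["a", "bb", "longenough", "cc", "dd"]

# Build a fresh list containing only the words to keep, rather than
# removing items from the list being iterated over.
words = [w for w in words if len(w) >= minlength]
print(words)  # ['longenough']
```

Note that the condition is inverted relative to the removal loop: the comprehension states which items to keep, not which to discard.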

I’d seen the for item in list[:]: construct in sample code before, but the exact nature of the bugs it prevents hadn’t been clear to me until now. Perhaps this will be enlightening to someone else as well.

[/technology/software] permalink

Tue, 23 Aug 2016

About fifty pages into John Bruce Medaris’s 1960 autobiography Countdown for Decision, there is an unsourced quote attributed to Col. C.G. Patterson, who in 1944 was in charge of Anti-Aircraft Artillery for the U.S. First Army, outlining the concept of a “technological casualty”:

“If a weapon costs more to build, in money, materials, and manpower, than it costs the enemy to repair the damage the weapon causes, the user has suffered a technological casualty. In any long-drawn-out struggle this might be the margin between victory and defeat.” 1

As far as I can tell, the term “technological casualty” never passed into general usage with that meaning, which is unfortunate. And although sources do confirm that Col. Patterson existed and by all accounts served admirably as the commander of air defense artillery for First Army in 1944, there doesn’t appear to be much record outside of Medaris’ book of the quote. Still, credit where it is most likely due; if ever a shorthand name for this idea is required, I might humbly suggest “Patterson’s Dictum”. (It also sounds good.)

I suspect, given Patterson’s role at the time, that the original context of the quote had to do with offensive or defensive air capability. Perhaps it referred to the attrition of German capability that was at that point ongoing. In Countdown, Medaris discusses it in the context of the V-2, which probably consumed more German war resources to create than they destroyed of Allied ones. But it is certainly applicable more broadly.

On its face, Patterson’s statement assumes a sort of attritional, clash-of-civilizations, total-commitment warfare, where all available resources of one side are stacked against all available resources of the other. One might contend that it doesn’t seem to have much applicability in the age of asymmetric warfare, now that we have a variety of examples of conflicts ending in a victory — in the classic Clausewitzian political sense — by parties who never possessed any sort of absolute advantage in money, materials, or manpower.

But I would counter that even in the case of a modern asymmetric war, or realpolitik-fueled ‘brushfire’ conflicts with limited aims, the fundamental calculus of war still exists, it just isn’t as straightforward. Beneath all the additional terms that get added to the equation is the essential fact that defeat is always possible if victory proves too expensive. Limited war doesn’t require you outspend your adversary’s entire society, only their ‘conflict budget’: their willingness to expend resources in that particular conflict.

Which makes Patterson’s point quite significant: if a modern weapons system can’t subtract as much from an adversary’s ‘conflict budget’ — either through actual destructive power, deterrence, or some other effect — as it subtracts from ours in order to field it (including the risk of loss), then it is essentially a casualty before it ever arrives.

1: Countdown for Decision (1960 ed.), page 51.

[/politics] permalink

Mon, 22 Aug 2016

Ars Technica has a nice article, published earlier this month, on the short life of the Digital Compact Cassette format, one of several attempts to replace the venerable analog cassette tape with a digital version, prior to its eventual demise in the download era.

At risk of dating myself, I remember the (very brief) rise and (anticlimactic) fall of the Digital Compact Cassette, although I was a bit too poor to be among the early adopters and hi-fi-philes that the first decks were aimed at. And while the Ars article is decent, it ignores the elephant in the room that contributed mightily to DCC’s demise: DRM.

DCC was burdened by a DRM system called SCMS, also present in the consumer version of DAT. This inclusion was not the fault of Philips or Matsushita (later Panasonic), who designed DCC, but a result of an odious RIAA-backed law passed in 1992, the Audio Home Recording Act, which mandated it in all “digital audio recording device[s]”.

It is telling that of the variety of formats which were encumbered by SCMS, exactly zero ever succeeded in the marketplace in a way that threatened the dominant formats. The AHRA was (and, de jure, remains: it’s still on the books, a piece of legal “unexploded ordnance” waiting for someone to step on it) the RIAA’s most potent and successful weapon in terms of suppressing technological advancement and maintaining the status quo throughout the 1990s.

Had it not been for the AHRA and SCMS, I think it’s likely that US consumers might have had not one but two alternative formats for digital music besides the CD, and perhaps three: consumer DAT, DCC, and MiniDisc. Of these, DAT is probably the best format from a pure-technology perspective — it squeezes more data into a smaller physical space than the other two, eliminating the need for lossy audio compression — but DAT decks are mechanically complex, owing to their helical scan system, and the smallest portable DATs never got down to Walkman size. DCC, on the other hand, used a more robust linear tape system, and perhaps most importantly it was compatible with analog cassette tapes. I think there is a very good chance that it could have won the battle, if the combatants had been given a chance to take the field.

But the AHRA and SCMS scheme conspired to make both consumer-grade DAT and DCC unappealing. Unlike today, where users have been slowly conditioned to accept that their devices will oppose them at every opportunity in the service of corporations and their revenue streams, audio enthusiasts from the analog era were understandably hostile to the idea that their gear might stop them from doing something it was otherwise quite physically capable of doing, like dubbing from one tape to another, or from a CD to a tape, in the digital domain. And a tax on blank media just made the price premium for digital, as opposed to analog, that much higher. If you are only allowed to make a single generation of copies due to SCMS, and if you’re going to pay extra for the digital media due to the AHRA, why not just get a nice analog deck with Dolby C or DBX Type 2 noise reduction, and spend the savings on a boatload of high-quality Type IV metal cassettes?

That was the question that I remember asking myself anyway, at the time. I never ended up buying a DCC deck, and like most of the world continued listening to LPs, CDs, and analog cassettes right up until cheap computer-based CD-Rs and then MP3 files dragged the world of recorded music fully into the digital age, and out of the shadow of the AHRA.

[/technology] permalink

Tue, 16 Aug 2016

Bloomberg’s Matt Levine has a great article, published today, which begins with a discussion of the apparently-hollow shell company “Neuromama” (OTC: NERO), which — cue shocked face — is probably not in reality a $35 billion USD company, but quickly moves into a delightful discussion of insider trading, money market rates, an “underpants gnomes”-worthy business plan, and the dysfunction of the Commodity Futures Trading Commission. There’s even a bonus mention of Uber shares trading on the secondary market, which is something I’ve written about before. Definitely worth a read:

Heavy Ion Fusion and Insider Trading

If you only read one section of it, the part on “When is insider trading a crime?” is, in my humble opinion, probably the best. (Memo to self: next time there’s a big insider-trading scandal, be sure to come back to this.) But really, it’s a good article. Okay, there’s a bit too much gloating about those stupid regulators and their stupid regulations for someone who isn’t a hedge fund manager to get excited about, but it’s fucking Bloomberg, that’s probably a contractual obligation to get printed there. Also it’s Congress’ fault anyway, as usual.

0 Comments, 0 Trackbacks

[/finance] permalink

Mon, 15 Aug 2016

Very cool open-source project VeraCrypt is all over the news this week, it seems. First when they announced that they were going to perform a formal third-party code audit, and had come up with the funds to pay for it; and then today when they claimed their emails were being intercepted by a “nation-state” level actor.

The audit is great news, and once it’s complete I think we’ll have even more confidence in VeraCrypt as a successor to TrueCrypt (which suffered from a bizarre developer meltdown[1] back in 2014).

The case of the missing messages

However, I’m a bit skeptical about the email-interception claim, at least based on the evidence put forward so far. It may be the case — and, let’s face it, should be assumed — that their email really is being intercepted by someone, probably multiple someones. Frankly, if you’re doing security research on a “dual use” tool[2] like TrueCrypt and don’t think that your email is being intercepted and analyzed, you’re not participating in the same consensus reality as the rest of us. So, not totally surprising on the whole. Entirely believable.

What is weird, though, is that the evidence for the interception is that some messages have mysteriously disappeared in transit.

That doesn’t really make sense. It doesn’t really make sense from the standpoint of the mysterious nation-state-level interceptor, because making the messages disappear tips your hand, and it also isn’t really consistent with how most modern man-in-the-middle style attacks work. Most MITM attacks require that the attacker be in the middle, that is, talking to both ends of the connection and passing information. You can’t successfully do most TLS-based attacks otherwise. If you’re sophisticated enough to do most of those attacks, you’re already in a position to pass the message through, so why not do it?

There’s no reason not to just pass the message along, and that plus Occam’s Razor is why I think the mysteriously disappearing messages aren’t a symptom of spying at all. I think there’s a much more prosaic explanation. Which is not to say that their email isn’t being intercepted. It probably is. But I don’t think the missing messages are necessarily a smoking gun displaying a nation-state’s interest.

Another explanation

An alternative, if more boring, explanation to why some messages aren’t going through has to do with how Gmail handles outgoing email. Most non-Gmail mailhosts have entirely separate servers for incoming and outgoing mail. Outgoing mail goes through SMTP servers, while incoming mail is routed to IMAP (or sometimes POP) servers. The messages users see when looking at their mail client (MUA) are all stored on the incoming server. This includes, most critically, the content of the “Sent” folder.

In order to show you messages that you’ve sent, the default configuration of many MUAs, including Mutt and older versions of Apple Mail and Microsoft Outlook, is to save a copy of the outgoing message in the IMAP server’s “Sent” folder at the same time that it’s sent to the SMTP server for transmission to the recipient.
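The two-step dance can be sketched in a few lines of Python; the hostnames, credentials, and folder name below are placeholders, and a real MUA handles errors and folder discovery far more carefully:

```python
import imaplib
import smtplib
from email.message import EmailMessage

def send_and_save(msg, smtp_host, imap_host, user, password):
    # Step 1: hand the message to the outgoing (SMTP) server.
    with smtplib.SMTP(smtp_host, 587) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)
    # Step 2: separately APPEND a copy to the incoming (IMAP) server's
    # "Sent" folder. Gmail's SMTP servers do this step for you, which
    # is why Gmail users configure their MUA to skip it.
    with imaplib.IMAP4_SSL(imap_host) as imap:
        imap.login(user, password)
        imap.append('"Sent"', None, None, msg.as_bytes())

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "hello"
msg.set_content("Sent via SMTP, saved via IMAP.")
# send_and_save(msg, "smtp.example.com", "imap.example.com", "me", "s3cret")
```

The point is just that sending and saving are two independent operations against two different servers, and either one can succeed while the other fails.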

This is a reasonable default for most ISPs, but not for Gmail. Google handles outgoing messages a bit differently, and their SMTP servers have more-than-average intelligence for an outgoing mail server. If you’re a Gmail user and you send your outgoing mail using a Gmail SMTP server, the SMTP server will automatically communicate with the IMAP server and put a copy of the outgoing message into your “Sent” folder. Pretty neat, actually. (A nice effect of this is that you get a lot more headers on your sent messages than you’d get by doing the save-to-IMAP route.)

So as a result of Gmail’s behavior, virtually all Gmail users have their MUAs configured not to save copies of outgoing messages via IMAP, and depend on the SMTP server to do it instead. This avoids duplicate messages ending up in the “Sent” folder, a common problem with older MUAs.

This is all fine, but it does have one odd effect: if your MUA is configured to use Gmail’s SMTP servers and then you suddenly use a different, non-Google SMTP server for some reason, you won’t get the sent messages in your “Sent” box anymore. All it takes is an intermittent connectivity problem to Google’s servers, causing the MUA to fail over to a different SMTP server (maybe an old ISP SMTP or some other configuration), and messages won’t show up anymore. And if the SMTP server it rolls over to isn’t correctly configured, messages might just get silently dropped.

I know this because it’s happened to me: I have Gmail’s SMTP servers configured as primary, but also have my ISP’s SMTP servers set up in my MUA, because I have to use them for some other email accounts that don’t come with a non-port-25 SMTP server (and my ISP helpfully blocks outgoing connections on port 25). It’s probably not an uncommon configuration at all.

Absent some other evidence that the missing messages are being caused by a particular attack (and it’d have to be a fairly blunt one, which makes me think someone less competent than nation-state actors), I think it’s easier to chalk the behavior up to misconfiguration than to enemy action.

Ultimately though, it doesn’t really matter, because everyone ought to be acting as though their messages are going to be intercepted as they go over the wire anyway. The Internet is a public network: by definition, there are no security guarantees in transit. If you want to prevent snooping, the only solution is end-to-end crypto combined with good endpoint hygiene.

Here’s wishing all the best to the VeraCrypt team as they work towards the code audit.

[1]: Those looking for more information on the TrueCrypt debacle can refer to this Register article or this MetaFilter discussion, both from mid-2014. This 2015 report may also be of interest. But as far as I know, the details of what happened to the developers to prompt the project’s digital self-immolation are still unknown and speculation abounds about the security of the original TrueCrypt.

[2]: “Dual use” in the sense that it is made available for use by anyone, and can be therefore used for both legitimate/legal and illegitimate/illegal purposes. I think it goes almost without saying that most people in the open-source development community accept the use of their software by bad actors as simply a cost of doing business and a reasonable trade-off for freedom, but this is clearly not an attitude that is universally shared by governments.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Fri, 12 Aug 2016

The work I’ve been doing with Tvheadend to record and time-shift ATSC broadcast television got me thinking about my pile of old NTSC tuner cards, leftover from my MythTV system designed for recording analog cable TV. These NTSC cards aren’t worth much, now that both OTA broadcast and most cable systems have shifted completely over to ATSC and QAM digital modulation schemes, except in one regard: they ought to be able to still receive FM broadcasts.

Since the audio component of NTSC TV transmissions is basically just FM, and the NTSC TV bands completely surround the FM broadcast band on both sides, any analog TV receiver should have the ability to receive FM audio as well — at least in mono (FM stereo and NTSC stereo were implemented differently, the latter with a system called MTS). But of course whether this is actually possible depends on the tuner card’s implementation.

I haven’t plugged in one of my old Hauppauge PCI tuner cards yet, although they may not work because they contain an onboard MPEG-2 hardware encoder — a feature I paid dearly for, a decade ago, because it reduces the demand on the host system’s processor for video encoding significantly — and it wouldn’t surprise me if the encoder failed to work on an audio-only signal. My guess is that the newer cards which basically just grab a chunk of spectrum and digitize it, leaving all (or most) of the demodulation to the host computer, will be a lot more useful.
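For those “grab a chunk of spectrum and digitize it” cards, the demodulation left to the host is surprisingly simple. Here’s a toy quadrature FM demodulator in pure Python, run against a synthetic I/Q stream rather than a real capture, and not tied to any particular card or driver:

```python
import cmath
import math

fs = 48_000        # sample rate, Hz
deviation = 5_000  # peak frequency deviation, Hz
tone = 1_000       # modulating audio tone, Hz

# FM-modulate a test tone into complex baseband I/Q samples -- the kind
# of raw stream an SDR front-end hands to the host computer.
audio = [math.sin(2 * math.pi * tone * n / fs) for n in range(4_800)]
phase = 0.0
iq = []
for sample in audio:
    phase += 2 * math.pi * deviation * sample / fs
    iq.append(cmath.exp(1j * phase))

# Quadrature demodulation: the angle of s[n] * conj(s[n-1]) is the
# per-sample phase advance, which is proportional to the audio signal.
scale = fs / (2 * math.pi * deviation)
demod = [cmath.phase(iq[n] * iq[n - 1].conjugate()) * scale
         for n in range(1, len(iq))]

error = max(abs(d - a) for d, a in zip(demod, audio[1:]))
print(f"max demodulation error: {error:.2e}")
```

Real broadcast FM adds de-emphasis, stereo multiplexing, and resampling on top of this, but the core math-on-I/Q-samples step really is that small, which is why host-side demodulation is cheap on any modern CPU.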

I’m not the first person to think that having a ‘TiVo for radio’ would be a neat idea, although Googling for anything in that vein gets you a lot of resources devoted to recording Internet “radio” streams (which I hate referring to as “radio” at all). There have even been dedicated hardware gadgets sold from time to time, designed to allow FM radio timeshifting and archiving.

  • Linux based Radio Timeshifting is a very nice article, written back in 2003, by Yan-Fa Li. Some of the information in it is dated now, and of course modern hardware doesn’t even break a sweat doing MP3 encoding in real time. But it’s still a decent overview of the problem.
  • This Slashdot article on radio timeshifting, also from 2003 (why was 2003 such a high-water-mark for interest in radio recording?), still has some useful information in it as well.
  • The /drivers/media/radio tree in the Linux kernel contains drivers for various varieties of FM tuners. Some of the supported devices are quite old (hello, ISA bus!) while some of them are reasonably new and not hard to find on eBay.

Since I have both a bunch of old WinTV PCI cards and a newer RTL2832U SDR dongle, I’m going to try to investigate both approaches: seeing if I can use the NTSC tuner as an over-engineered FM receiver, and if that fails maybe I’ll play around with RTL-SDR and see if I can get that to receive FM broadcast at reasonable quality.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Thu, 11 Aug 2016

Echoing the theme of an article I read yesterday, about the FCC’s intentional — or at best negligent — duopoly in wired broadband, is this article about the current “5G” hype, and how it seems to be assisting the big telcos in disguising their under-investment in FTTH / FTTP in favor of more-profitable wireless services:

The Next Generation of Wireless — “5G” — Is All Hype

The author writes:

Cynics might point out that by waving their hands around about the coming miracle of 5G — even though its arrival is really a long way off — carriers are directing attention away from the terrible state of fiber last-mile infrastructure in the US. Call me one of those cynics. This kind of misleading tactic isn’t difficult to pull off in the U.S. […] A leading tech VC in New York, someone who is viewed as a thought leader, said to me not long ago, “Why do you keep talking about fiber? Everything’s going wireless.”

This is eerily similar to claims used by the telcos and cablecos to justify diminished regulation, by pointing to BPL (broadband over power lines). The major justification for eliminating ‘unbundling’ regulation, and for not applying it to cable lines at all, was that consumers were going to be able to obtain Internet service over a variety of last-mile circuits, including cable lines, telephone lines, fiber, and power wiring. This, of course, was horseshit — BPL was always a terrible idea — but it was just plausible enough to keep the regulators at bay while the market condensed into a duopoly.

Given that the telecommunications companies want nothing other than to extract maximum economic rents from consumers for as long as they can, while investing as little as they possibly can for the privilege — this is how corporations work, of course, so we shouldn’t be especially surprised — we should treat the 5G hype with suspicion.

No currently-foreseeable wireless technology is going to reduce the need for high-bandwidth (read: fiber-optic) backhaul; 5G as envisioned by most rational people would, in fact, vastly increase the demand for backhaul and the need for FTTH/FTTP. Be on guard for anyone who suggests that 5G will make investments in fiber projects — especially muni fiber — unnecessary, as they are almost certainly trying to sell you something, and probably nothing you want to buy.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

About a decade ago, I spent a slightly-absurd amount of time building a MythTV system for my house. It was pretty awesome, for the time: basically a multi-player distributed DVR. It could record 3 programs off of cable TV simultaneously, while also letting up to 3 people play back recordings on different TVs in the house.

It lasted up until we moved, and I didn’t have the time to get everything hooked up and working again. By that point commercial streaming services had started to take off, digital cable had reduced the amount of programming that you could easily record with an inexpensive NTSC tuner card, and cable TV prices had crept up to the point where I was looking for a way to watch less cable TV rather than more. We made the switch to an all-IP TV system (Netflix, Hulu, Amazon Prime) a while later, and never really looked back.

But recently, spurred mostly by a desire to watch the Olympics live — a desire left largely unfulfilled by NBC; thanks, guys — we got an antenna and hit the ‘auto scan’ button. The number of free over-the-air (OTA) ATSC TV stations we could receive was a pleasant surprise, especially to someone who grew up in a rural area and still thought of ‘broadcast TV’ as a haze of static on a good day.

Knowing that there were 30+ free OTA channels (when you count digital subchannels) available in my house, for the cost of only an antenna, got me looking back at the state of Linux-based PVR and timeshifting software.

MythTV, of course, is still around. But if my past experience is any guide, it’s not something you just casually set up. Also, it still seems to be designed with the idea of a dedicated HTPC in mind: basically, to use it, you’ll want a Linux PC running MythTV connected directly to your TV. The MythTV clients for STBs like the Roku or Amazon Prime stick seem pretty immature, as does the Plex plugin. Although I may end up coming back to it, I really didn’t feel like going back down the MythTV road if there were lighter-weight options for recording OTA TV and serving it up.

Enter Tvheadend, which seems to be a more streamlined approach. Rather than offering an entire client/server solution for DVRing, content management, and HTPC viewing, it’s just the DVR and, to a lesser extent, content management. The idea is that you set up and schedule recordings via a web interface to your server, and then the server makes those recordings available via DLNA to streaming devices on the network.

It’s in no way a complete replacement for MythTV, but it seemed to talk to my HDHomeRun (the original two-tuner ATSC/QAM model, not one of the newer ones with built-in DLNA) with very little configuration at all. So far, I’ve got it working to the point where I can watch live TV using VLC on a client machine in the house, and I’m just now starting to work on getting an Electronic Program Guide (EPG) set up. That part is a bit tricky in the US, because OTA metadata here uses a different format than in Europe, where most of the project’s developers seem to be located.

Anyway, I’m glad to see that MythTV is still apparently going strong, and if I had more time I’d definitely love to cobble together a DIY all-IP home video distribution and centralized PVR system again. But given limited time for side projects these days, Tvheadend seems to fit the bill for a lighter-weight OTA network recorder.

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 10 Aug 2016

“America’s Intentional Broadband Duopoly” by Dane Jasper, writing on the blog of Sonic.net Inc., an ambitious Gigabit ISP, is one of the best summaries of why US broadband is the way it is that I’ve read. If you live in the US and use the Internet, it’s worth reading, just to understand why your Internet access options suck so damn badly compared to the rest of the civilized world.

Spoiler Alert: It is not, as telco / cableco apologists sometimes claim, a function of geography or population density — there are ample examples of countries with both more challenging geography and less-dense populations that have far better, and cheaper, Internet service. (And the population density argument is really a red herring when you realize that most of the US population lives in areas that are pretty dense, like the Eastern Seaboard, which is comparable to Europe.) The answer is a sad combination of political lobbying, regulatory capture, and technological false promises.

In case their site goes down at some point in the future, here’s a link to the Internet Archive’s cached version.

Via MetaFilter.

0 Comments, 0 Trackbacks

[/politics] permalink

Tue, 09 Aug 2016

All the way back in 2008 — you remember 2008, right? Back when oil hit $100/barrel for the first time, and a whole bunch of Americans thought Russia had invaded the Peach State, and who can forget the International Year of the Potato? — two days after the election, I wrote the following:

About the only positive aspect of [the Democrats’ victory] that I can find, is that it might represent the death knell of the far-right, authoritarian “conservatives” that have monopolized the GOP brand for too long. […] The far-right just isn’t socially mainstream enough to form the core of a majority political party.

I stand by that statement, by the way, even in the face of Trump; what Trump shows is that a dedicated, passionate minority can get a basically-unelectable candidate all the way to the general election.

But it’s disheartening that the lesson the Republican establishment learned from 2008 wasn’t “don’t let the inmates run the asylum”, but instead was, seemingly, “don’t pick Sarah Palin as a running mate.” (To their credit, nobody has repeated that particular mistake as far as I know.)

As Trump slides towards a 10-point gap behind Clinton, and has almost certainly alienated blue-collar white voters in key swing states like Pennsylvania with his anti-military rants, it will be interesting to see whether the GOP as a party finally learns a more general lesson about the disconnect between primary voters and the rest of the country, or if — like the aftermath of 2008 — they manage only to add one more mistake to the long list of things they won’t do again.

0 Comments, 0 Trackbacks

[/politics] permalink

Fri, 05 Aug 2016

It seems that the shine has started to wear off of the “giant unicorns”, the largest non-public tech startups with valuations over $1 billion USD. To the point where people are starting to wonder how, exactly, they could short Uber, that unicorn-among-unicorns (perhaps an ‘ubercorn’?).

It’s almost as if people are waking up to the idea that a company that doesn’t own any meaningful capital assets and whose success depends on an easily-duplicated strategy and a mobile app, and whose most recent business innovation is to get into sub-prime vehicle leasing, might not be worth more than BMW, Ford, or General Motors.

That’s not to say that Uber, or ride-sharing generally, is doomed. But the $62.5 billion USD present valuation seems absurd, and there are a significant number of flaming hoops that the company has to successfully jump through in order for common-share investors to get paid out at that level.

The $62.5B number implicitly assumes not just that Uber will continue to be successful as an urban ride-sharing taxi alternative, but that it will be an agent of radical, transformational change in global personal transport. Specifically, it seems to require that the dominant (and admittedly inefficient) model of personal automobile ownership pioneered in the US in the 20th century will collapse, and be replaced with fleets of time-shared robotic cars. Nothing else short of that would result in the $60+ billion valuation.

Taking a bet on autonomous vehicles is one thing, but putting all your chips on the assumption that the public will suddenly abandon its love affair with cars and begin behaving like rational economic actors is quite another.

Reading between the lines, it would seem that Uber’s leadership probably agrees at some level, and that’s why they’re so reluctant to IPO. If they were to go public today, their market cap would probably be nowhere near the $60B figure, and individual employees and early-round investors would essentially be wiped out due to late-round funding terms. So they’ve chosen to delay the IPO as long as they can, perhaps in the hope that all the long-shot bets will pay off by then. It’s a big gamble.

Uber, by prohibiting secondary sales of its pre-IPO shares, essentially prohibits straightforward short positions, making it a “one-way bet”: you can bet that they’ll succeed, but you can’t bet that they’ll fail — all you can do is not play. Since I don’t take short positions as a rule this doesn’t bother me, but it does further suggest that their valuation is somewhat bogus, and they know it.

My guess is that Uber isn’t going anywhere, but there’s going to be some very serious retrenchment in both their ambitions and in their total valuation in the next few years. I wouldn’t go out of my way to achieve a short position against them, but the lengths to which investors are going to get a piece of their action seems like a classic irrationally-exuberant bubble market.

0 Comments, 0 Trackbacks

[/finance] permalink

Thu, 04 Aug 2016

Apparently I’m one of the few people still using Blosxom to run their blog, the rest of the world having moved on to shinier solutions in the intervening decade or so, but I’m a curmudgeon who hates change. So here we are.

Quite a few of the sites that used to host various Blosxom plugins have gone offline, and finding new modules and their documentation is becoming challenging. Most of them aren’t being actively maintained, either.

To fix a few problems that I’ve run into myself, I created GitHub projects for two plugins, in order to make them more accessible and also perhaps encourage other people besides the original maintainers to work on them and contribute fixes. It seems like a low-effort way to keep the platform as a whole a bit more healthy than it would otherwise be.

  • “Feedback” plugin, originally by Frank Hecker.
    Used to provide the feedback / comment form at the bottom of each article page, which are moderated via email messages sent to the site owner for approval. I have modified the latest ‘master’ version in order to prevent errors with newer versions of Perl; these changes are currently in the ‘dev’ branch, and feedback (ha) on them is welcome.
  • “Calendar” plugin, originally by Todd Larason.
    Provides the small calendar in the site’s navigation bar, allowing access to past articles by month and year.

In addition, there are some large collections of Blosxom plugins on GitHub; the biggest is maintained by Blosxom Fanatics in the Plugins repository. However, the repo only has a single commit, and seems to be more of a historical archive than a basis for continued development.

0 Comments, 0 Trackbacks

[/meta] permalink

It looks like the honeymoon, if there ever really was one, is over for Candidate Trump, and people are seriously starting to consider whether it would be better for the Republican party if he just lost the election.

Writing in The Guardian, Katrina Jorgensen spells it out:

[F]or the party to come back strong after Donald Trump’s divisive candidacy […] the least-worst option is a major loss in the presidential race.

The key word here is “major”. Intentionally or not, Trump has signaled with his ‘rigged election’ comments that a narrow loss wouldn’t necessarily be a clear sign to sit down and shut up.

If Trump only trails [Clinton] by a few points, you can bet he will blame the Republicans who voted their conscience. Or he’ll kick up dirt over the “rigged” system, as he has already alluded to. Trump supporters in the party will go on a witch-hunt […] Only a loss by a wide margin would send a clear message to the Republican party: this is the wrong choice for America.

Basically, Republicans need to cordon off Trump from the rest of the party and in particular from down-ballot Senate elections. Barring an unexpected retreat by Trump himself, which seems unlikely, the Presidency is essentially a lost cause — but the House and Senate are not. Trump’s increasingly bizarre behavior may actually help differentiate other candidates from him, and make it more difficult for Democrats to use him as leverage, because he is simply that clearly divorced from the rest of the party’s mainstream candidates.

Then, the party needs to give some serious thought to its primary system. Ironically, it wouldn’t be surprising if the Republicans end up with the same sort of superdelegate-heavy system that the Democrats implemented, and which basically doomed the Sanders campaign in favor of the safe (but unpopular) Clinton in their own primary this year. So the strategy is certainly not without risks. But the general election, if it led to a lopsided Republican defeat by Clinton, would show that the failure mode of the superdelegate-heavy, establishment-driven primary system is preferable to the failure mode of the populist-driven system the Republicans currently use.

As Paul Ryan said earlier today, “[Republicans] are a grass-roots party; we aren’t a superdelegate party.” One can only wonder if perhaps he’s wishing that wasn’t the case.

0 Comments, 0 Trackbacks

[/politics] permalink

Wed, 03 Aug 2016

Years ago I came across a piece by a journalist named Alex Steffen called Night, Hoover Dam. It summed up a lot of feelings that I had about the “survivalist fallacy”, to the point where I even wrote a blog post about it back in May 2008.

It was originally hosted on a site called ‘Worldchanging.com’, an environmental website which apparently got acquired and subsequently killed in 2011. This is a shame, because there was a lot of good content there, and I can’t imagine it would have cost them much to keep it going. But thankfully, we have the Internet Archive, and so the piece itself wasn’t lost for good.

Here’s the archived version: https://web.archive.org/web/20160111223335/http://www.worldchanging.com/archives/001413.html

It’s still worth a read.

0 Comments, 0 Trackbacks

[/politics] permalink

Wed, 27 Jul 2016

Another choice quote from Herb York’s Race to Oblivion:

Herman Kahn in his book On Thermonuclear War, written in 1959 when the rate of breakthroughs seemed to be still rising, ma[de] a whole set of extrapolations which turned out to be false. He predicted then that by 1969 we would probably have “cheap simple [nuclear] bombs,” “cheap simple missiles,” controlled thermonuclear power, “Californium bullets” (by which he meant A-bombs very much smaller than any we now have), and a superior substitute for radar. He said we would be able to put payloads in orbit for only ten dollars a pound. He predicted that by 1973 we would be working on supersonic bombers and supersonic fighters two generations beyond the B-70 and the F-108 and that there would be manned offensive satellites and manned defensive satellites in orbit. Every one of these errors in prediction arose out of the twin false assumptions that the immediate past was typical and that the technological future could be predicted by simple extrapolation. These errors are also illustrative of what happens when analysts use sophisticated methods but poor or nonsensical inputs: the final result cannot be better than the inputs no matter how fancily they may be processed.

(From Chapter 8, “The McNamara Era”, page 158.)

This is a particular military example of what I like to call the ‘Flying Car Problem’, which is the tendency for predictions of the future to massively overestimate gains in one area while completely ignoring others, due to a reliance on flawed straight-line approximations of technological progress. The result is a feeling of vague disappointment when those predictions don’t come to pass: “it’s 2016 and I have an iPhone… but where’s my flying car?”

It’s worth noting that York was writing in 1970, about predictions made only 11 years prior and which were even by that time clearly ridiculous (and may have been ridiculous when they were written, though I’ll give Kahn the benefit of the doubt); from our vantage point here in 2016 we can see even more clearly that York was correct.

The difference between the mid/late 50s, when Kahn was writing, and the late 60s, when York was writing the first edition of Race to Oblivion, represents the second inflection point on a sigmoid-shaped curve, where the rate of change of technology (in this case, nuclear technology), suddenly changed back from an exponential to a much slower linear regime. The late 50s were in the thick of the exponential phase, and nobody knew at the time exactly how long it would continue.
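The trap is easy to demonstrate numerically. Here’s a toy Python sketch (an illustration of the general point, not a model of any real technology) that fits a pure exponential to two points deep in the early phase of a logistic curve, then extrapolates past the inflection point:

```python
import math

def logistic(t):
    # An S-shaped curve: exponential-looking at first, saturating later.
    return 1 / (1 + math.exp(-t))

# Fit a simple exponential to two early samples, deep in the
# "exponential phase" of the curve...
t0, t1 = -6.0, -5.0
growth = logistic(t1) / logistic(t0)  # growth factor per unit time

def extrapolate(t):
    # ...and extend that trend forward as if it would hold forever.
    return logistic(t0) * growth ** (t - t0)

for t in (-4, -2, 0, 2, 4):
    print(f"t={t:+d}  actual={logistic(t):.3f}  extrapolated={extrapolate(t):.3f}")
```

By the time the real curve has flattened out near its ceiling, the extrapolation is predicting values dozens of times higher than reality; Kahn’s ten-dollars-a-pound orbital payloads were essentially this arithmetic applied to rocketry.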

I don’t think it’s a terrible stretch to say that the 50s and perhaps early 60s were analogous, in terms of nuclear and space technology, to the 80s and early 90s with personal computers. And Kahn’s predictions of pocket nukes and $10/lb orbital delivery are analogous to a host of breathless (but perhaps somewhat less apocalyptic) predictions of the PC revolution toppling nation-states and bringing about a new technocratic/individualist world order. As someone who lived through the PC and Internet revolutions but was born too late for the excitement of the Jet Age, these comparisons help put the “too cheap to meter” predictions of the past in perspective. They were wrong, sure, but they weren’t any more wrong than the turn-of-the-21st-century Internet utopians. It’s very, very hard to judge when that second inflection point — the one that brings you out of the exponential and back into the slow, linear curve of technological development — will arrive when you are in the midst of exponential year-over-year change. It’s only obvious after the fact.

And like York in 1970, we are now realizing that the future probably isn’t going to be quite as radically different as we thought it was going to be, even though there’s still a lot of technical work to be done. It’s just that the low-hanging fruit has mostly been picked; what’s left is the slow, expensive grind of incremental engineering, which generally doesn’t lead to what we perceive as sweeping social or lifestyle changes. (Though I’d argue that in fact it does result in just as much progress; it just doesn’t feel the same.)

At some point in the future I’ll probably devote a separate post to it, but Bruce Schneier’s new article Power in the Age of the Feudal Internet deals with this at length, as it applies to the Internet and networked society. As we come, ‘unevenly distributed’, over the peak at the top of the exponential-growth phase, many predictions of what the future will look like will have to be reevaluated.

0 Comments, 0 Trackbacks

[/technology] permalink

I noticed that on several Dell laptops I’ve upgraded to Debian 8 ‘Jessie’ with the XFCE desktop environment, the keyboard mute button had stopped working. Or rather, the button mutes the audio just fine, but pressing it again doesn’t actually unmute. To get audio back, you have to manually invoke alsamixer and unmute from there.

After more searching than it seemed this problem ought to require, I found this StackExchange answer, which references a post on Rony Lutsky’s blog which gave me the solution.

It turns out that the fix is remarkably simple:

sudo apt-get install gstreamer0.10-pulseaudio

You can then, if you want, verify that it worked by running xfconf-query -lc xfce4-mixer before and after installing gstreamer0.10-pulseaudio, but this isn’t a key part of the process.

From what I can tell, the issue is a missing dependency in one of the XFCE audio packages, but I’m damned if I know which one exactly.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Tue, 26 Jul 2016

Here’s an interesting bit of Space Race trivia that I’d never heard before, from Herbert F. York’s book “Race to Oblivion” (mentioned previously):

The third pre-Sputnik satellite program was bootlegged by the Army. The Von Braun group had earlier submitted a proposal for a rocket for launching the IGY satellite to the committee duly charged with launcher selection. In what I understand to have been a fair competition, the winner was the Navy’s Vanguard proposal. However, the Medaris-Von Braun group was not one to be stopped by a mere decision of higher authority, and they went ahead and designed a new satellite launcher which they named the Jupiter C. […]

This Jupiter C was not really a Jupiter; rather, it was a Redstone plus upper stages consisting of clusters of small solid rockets. Its ostensible purpose was testing nose-cone materials for Jupiter, but the actual velocity attained (and not accidentally) was more nearly that of an Atlas, the development of which was the sole responsibility of the Air Force. Even on its very first launch, it carried an additional dummy stage, “filled with sand instead of powder,” which if properly filled and fired could have been used to send it into orbit well in advance of Sputnik and the IGY.

I’d never heard this before, and it was surprising to learn that Von Braun and Co. could have, if they had been allowed to go full-tilt, beaten the Soviet Sputnik program. It is interesting to imagine what the ensuing decades would have been like had that occurred, and whether the US space program would have received the degree of investment that it did as a result of being perceived as behind the Soviets.

York references a book called “Countdown for Decision” by John B. Medaris, which is not available online — a used copy is now on its way to me, however. Seems like an interesting read.

Also, I think that perhaps York means “bootstrapped” rather than “bootlegged” in the first quoted sentence. It’s hard to tell, though; when the book was written in the 70s, I don’t imagine the former term had anything like the common currency (driven by its use in the IT field) that it has today.

0 Comments, 0 Trackbacks

[/technology] permalink

Mon, 25 Jul 2016

As a result of an interesting link on Hacker News, specifically to a post on Alex Wellerstein’s blog “Nuclear Secrecy” called “A brief history of the nuclear triad” — which is a good read and thoroughly recommended — I discovered the text of Herb York’s 1978 autobiography ‘Race to Oblivion: A Participant’s View of the Arms Race’ online as HTML. I can only hope that the online text is legal, because the book is otherwise unavailable except as used copies, and it is certainly still relevant.

The book seems to be typically only read or studied by those in classes dealing with arms control or strategic policy, which is a bit unfortunate as there’s quite a few gems in there, completely aside from the book’s stated purpose.

In particular, the author mentions something (in chapter 5, marked as page 91) about the defense budget that anyone who has worked in the Federal sphere can probably relate to:

Defense planning is full of arbitrary figures and figurings that have been thoroughly rationalized only after the fact. The number of units of many types of equipment is almost as arbitrary; so are the total numbers of men in the various services; and hence so is the total defense budget itself. I would say that the defense budget is arbitrary by at least a factor of two. The fierce arguments that can break out over a cut of, say, five percent have their origins in the very great difficulties of making changes in large tradition-bound systems and not in the fact that the numbers as they originally stood were correct in any absolute sense. Thus, the real reason that this year’s defense budget is so and so many billion dollars is simply that last year’s defense budget was so and so many billion, give or take about five percent. The same thing, of course, applies to last year’s budget and the budget of the year before that. Thus the defense budget is not what it is for any absolute reason or because in any absolute sense the total cost of everything that is supposedly truly needed comes out to be precisely that amount, but rather it is the sum total of all the political influences that have been applied to it over a history of many years, and that have caused it to grow in the way that it has grown.

Or, to borrow the technical term, what York is suggesting is that the defense bureaucracy, viewed as a system, basically has a fixed slew rate. You can expand or contract the defense budget, but because the system itself resists change, it’s very rare to have the political will to change it by more than 5% or so per budget cycle. It also looks more than a bit suspicious that this slew rate works out so roundly to 5%, a number we find deliciously convenient on account of our five digits. It makes me wonder whether this value isn’t quite often chosen, consciously or otherwise, as the breaking point between the forces of change and the forces of stability in budget negotiations.

I’m not convinced that this relatively low maximum slew rate is necessarily a bad thing when you are dealing with an institution as large as the DoD. It would probably be worse if the budget were subject to political whims that could swing it more sharply from year to year; the result would almost certainly be more favoritism, if not outright corruption. But it does present a significant challenge: taking that limit as a premise, if you want the budget to be a certain amount by a certain time, or focused on some set of priorities at some future date, you have to start pushing it in the right direction far in advance of the target.

That, in combination with relatively short political-leadership cycles (~8 years in the Executive branch, and not much longer on the House side of the Legislative; the Senate is somewhat slower-moving, but not by orders of magnitude), makes it hard to get anything intelligent done at all outside of a crisis. (Others may disagree, but I still have some faith in our institutional ability to react quickly when the chips are down; it’s just a hell of an expensive way to run a country.)
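To put rough numbers on that premise (my arithmetic, not York’s): if the budget can only move about 5% per annual cycle, simple compounding tells you how long any large change takes.

```python
import math

# If the budget can only change by ~5% per annual cycle, how many
# cycles does it take to move it by a given overall factor?
def cycles_to_reach(factor, slew_rate=0.05):
    return math.log(factor) / math.log(1 + slew_rate)

years_to_double = cycles_to_reach(2.0)  # roughly 14.2 years
```

Doubling the budget at that rate takes about fourteen budget cycles, comfortably longer than an eight-year executive leadership cycle, which is exactly the mismatch described above.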

In the coming decades, I think the challenge for established nation-state actors in the face of new adversaries — particularly non-state actors like, but not necessarily limited to, terror groups — is going to be not letting those groups permanently outmaneuver them by getting inside the OODA loop of the established players to such an extent that they become unable to adequately respond.

The silver lining to this for the West, if it can be said to be much of one, is that there’s no evidence that the emerging superpower states such as China and India are any better at all of this, or have a faster organizational “slew rate”, than we do. On this issue, we’re all basically in the same boat, and it’s a very large, very massive, and very slow-to-maneuver one.

0 Comments, 0 Trackbacks

[/politics] permalink

Sat, 23 Jul 2016

As part of making comments great again, I did a little hacking on Frank Hecker’s Feedback plugin for Blosxom in order to make it work with Perl 5.22, which is what the SDF currently uses by default.

It’s not exactly great moments in software engineering or anything, but in case anybody else was running into the same problem, I put the changed — I won’t go so far as to say “improved”, since I’ve barely tested it yet — version up on Github.

For now, the changes are only in the “dev” branch, while “master” contains Hecker’s original:
https://github.com/kadin2048/blosxom-feedback/tree/dev

Anyone else still using Blosxom and the Feedback plugin is encouraged to play with it and test it out. The most important change is probably this one, which may or may not fix a parameter-sanitization vulnerability. I have no evidence that Feedback was actually vulnerable; the problem was originally discovered in Bugzilla, which also uses the Perl CGI module, and its discovery led to a security warning being added to CGI.pm.

The (potential) issue that this solves is discussed in the article “New Class of Vulnerability in Perl Web Applications” by Gervase Markham, and the change to CGI.pm is mentioned in the comments.
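The class of bug, in a nutshell (an illustrative sketch, not code from the Feedback plugin): CGI.pm’s param() returns a list in list context, so an attacker who submits the same parameter more than once can inject extra key/value pairs into a hash built like this:

```perl
use CGI;
my $cgi = CGI->new;

# VULNERABLE: in list context, param() can return multiple values, so a
# request like ?name=foo&name=is_admin&name=1 injects extra hash keys.
my %user = ( name => $cgi->param('name') );

# SAFE: force scalar context so at most one value is used.
my %safe = ( name => scalar $cgi->param('name') );
```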

Some other changes made to Feedback include adding support for SMTP Auth, and the ability to specify a port for SMTP mail submission. These are useful if you need to use a standalone mailhost that requires authentication and use of port 587, which is increasingly common in shared-hosting environments.

0 Comments, 0 Trackbacks

[/meta] permalink

While poking around in the blog’s configuration files in order to fix the error that Perl had decided to throw with the feedback plugin, I noticed that Google was apparently unhappy with the lack of “mobile friendly” features. (In other words, it looked like shit.)

Not being one to argue with Google, I did the bare minimum required to make the site not-quite-terribly offensive when viewed from a phone or tablet. Not really because I think many people will actually use it that way, but mostly because Google now dings you in your search rankings if you don’t, and it’d be nice if some of the problems and solutions I’ve documented here were at least visible to people who are searching with the right terms.
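For what it’s worth, the “bare minimum” mostly amounts to a viewport declaration in the page head (plus not hard-coding any widths in the stylesheet), something along these lines:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```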

So, there you go, Google. Progress.

0 Comments, 0 Trackbacks

[/meta] permalink

Thu, 21 Jul 2016

Do try to contain your excitement. The fact that comments had been failing silently for so long, and nobody seemed to notice and/or care (least of all me) is probably suggestive that it wasn’t really a problem that needed to be solved.

Nonetheless, it was easier to solve the problem than it was to just remove comments, and once or twice in the past I’ve had comments that were actually useful and insightful, so what the hell. They ought to work now.

If for some reason you get an error when you’re trying to comment (on an article that’s less than 90 days old), feel free to drop me an email.

0 Comments, 0 Trackbacks

[/meta] permalink

Fri, 24 Jun 2016

Just a quick tip, because I found this information absurdly hard to find online using the search terms I was using. If anyone else out there has a Dell Latitude E6410 laptop, and wants to use it under Linux and achieve the same scrolling behavior as under Windows, using the big center button under the ‘DualPoint Stick’ (Dell’s term for the Touchpoint-ish control in the middle of the keyboard) to scroll, here’s what you need to do:

Create a new file in /usr/share/X11/xorg.conf.d/; I called it 60-wheel-emulation.conf, although the filename isn’t especially important as long as it doesn’t start with a number lower than the other files in the directory.

E.g. you can just do:

$ sudo emacs /usr/share/X11/xorg.conf.d/60-wheel-emulation.conf

In the file, add the following:

Section "InputClass"
   Identifier "Wheel Emulation"
   MatchProduct "DualPoint Stick"
   Option "EmulateWheel" "on"
   Option "EmulateWheelButton" "2"
   Option "XAxisMapping" "6 7"
   Option "YAxisMapping" "4 5"
EndSection

This activates a feature called (as you may have figured out) Wheel Emulation, which simulates scroll wheel behavior when a button is pressed and the mouse — or in this case, the pointing stick — is moved. In Windows, this is the default behavior for the Dell DualPoint, but in Linux, the default behavior is for that button to behave as an (absurdly large) traditional middle-click mouse button, which pastes the clipboard.

On a regular mouse, the Linux behavior (paste) is arguably a lot more useful, particularly if you also have an actual scrollwheel. But on the E6410, with the pointing stick, scrolling is a much more common interaction than pasting, and I found that I really missed it.

This restores the functionality to what you may be used to.

Further information can be found at this Unix StackExchange question, which is where I got the original tip. Note that you can’t just copy and paste from that page and have it work on a Dell; the product name is wrong. If you have another model or brand of laptop, however, you can determine the correct product name as described there, using the xinput --list command.
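For example (device names vary by model, and the grep pattern here is just a guess for non-Dell hardware):

```
# List every input device X knows about...
xinput --list
# ...or hunt for the pointing stick by name:
xinput list --name-only | grep -i 'stick\|trackpoint'
```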

0 Comments, 0 Trackbacks

[/technology/software] permalink

Sun, 20 Mar 2016

Or, “So You Bought This Thing on eBay, Now What?”

TL;DR version: The important bits

If you need to do a factory reset on a Linksys SRW2024P, you will need a DB9 female-to-female straight through cable, not a null modem cable or a typical Cisco cable or anything else. Connect at 38400 8N1, turn the switch on while holding down Ctrl-U in the terminal, at the firmware menu select “D” for delete, and then delete the “startup-config” file. Reboot the switch and it should be reset to 192.168.1.254 and username ‘admin’ without a password.

The whole story

By way of background, I’d been looking for a new ‘core switch’ for my home network for a while, to replace the grown-not-designed arrangement of crummy 5-port desktop switches that had been slowly proliferating throughout the house. A while back, in a fit of DIY hubris, I managed to run a lot of Cat 5e cabling through the walls of the house, home-running it all to a single point so that I could use one big switch.

While Ethernet switches are not exactly expensive these days, I had a couple of requirements: I wanted Gigabit, and not just on a couple of uplink ports, but on every port, and I also wanted Power Over Ethernet, so that I could drive IP phones, cameras, wireless APs, and other gadgets in the future without individual power supplies.

While Ethernet switches in general are cheap, GigE + PoE switches are not. You can easily drop several hundred dollars on a new one, and you have to get fairly high up into ‘business class’ territory to find the right mix of features (which is fine, since consumer networking equipment is largely garbage). However, after trolling through eBay for a few days, I noticed an exception: the Linksys SRW2024P. For reasons that aren’t immediately clear, there were a fair number of these things on the ‘bay for around $100, which is a pretty great deal for a managed GigE switch, even before the POE feature.

So, of course, I bought one. And then the fun began.

First, a few notes about the SRW2024P, in case you are thinking about buying one: they are loud. One discussion thread described them as “datacenter loud”, and that’s probably fair. You do not want to have one of them in your bedroom or home office, even in a closet. You might not even want it on the same floor as your bedroom or office, depending on how big your house is. It has a couple of very high-RPM fans that are just obnoxious. Second, the switch you buy will almost certainly not be factory-reset. At least, mine wasn’t, and most people asking questions on support forums don’t seem to have gotten them that way, either.

I think the current crop of eBay specials must be corporate datacenter pulls, and whoever previously owned them was smart enough not to leave them in their default configuration. Good for them, annoying for the next person.

The SRW2024P doesn’t have an easily-accessible reset button. In fact, as far as I can tell, it doesn’t have a reset button anywhere. To do a factory reset, you have to log into the switch via the serial port and wipe the settings file from the firmware. And this is where things get really fun, and brings me to the whole point of this post.

Linksys, aka Cisco, in their infinite wisdom and/or greed, decided against putting a regular serial-console port on the SRW2024P. The port on the front of the unit is an RS232 port, but with the pins arranged in such a way that virtually no widely-available serial cable will work.

There is a lot of misinformation floating around concerning the SRW2024P’s serial port. In particular, there are many suggestions online that you need to use an RS232 null modem cable to connect it to a computer. This is incorrect, and a null modem cable will not work.

What you need is a DB9 female-to-female, straight through cable. Which is not a null modem cable. A null modem cable has the Transmit and Receive Data pins crossed, so that “transmit” at one end of the cable arrives on the “receive” pin at the other; this allows two computers (“DTEs” or “data terminal equipment” in RS232 lingo) to communicate without a pair of modems (“DCEs”, “data communication equipment”, in the middle). Hence the ‘null modem’ name.

Typically, devices with DB9 male ports are DTEs, and female DB9 or DB25 ports are DCEs. If you still have a box of 1990s junk around somewhere, feel free to look at an actual modem. RS232 straight-through cables are typically Male to Female, while null modem cables are typically Female to Female. This is by convention, not Galactic Law or anything, but it’s widely followed.

Except by the SRW2024P. It won’t work with a null modem cable, despite the male DB9 port on the front leading you to (reasonably) think that it would. I tried a number of null modem cables, including the programming cables used by a variety of other switches and routers. None of them worked.

Basically, the SRW2024P has the TXD and RXD pins already swapped inside the male DB9 connector on the front. This is stupid, because it means neither a standard null modem cable nor a standard straight-through cable will work: the first swaps the pins back again, and the second has the wrong genders. But that’s what Linksys did. When the switches were sold new they reportedly came with a special cable, but good luck finding one now.

I wasn’t able to easily find any DB9 F-F straight-through cables, locally or for a reasonable price online; they just aren’t something that get used very often. The cheapest and easiest route was just to create one. To do the job, I just used a couple of these DB9 screw terminal breakout boards, but you could also use a couple of DB9-to-8P8C adapters and a piece of straight-through Cat5. Whatever works. But the important part is that the TXD at one end is wired to TXD at the other, RXD to RXD, and Ground to Ground. None of the other pins seem to matter, since there’s no flow control.
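If you’re wiring your own, the relevant pins (standard DB9 numbering) map straight across, something like this:

```
DB9 pin 2 (RXD)  <-->  DB9 pin 2 (RXD)
DB9 pin 3 (TXD)  <-->  DB9 pin 3 (TXD)
DB9 pin 5 (GND)  <-->  DB9 pin 5 (GND)
```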

Anyway, once you get the correct cable, the reset process is pretty straightforward:

  1. Connect up the cable

  2. Configure your terminal for 38400 baud, 8 data bits, 1 stop bit, no parity. (Aka “38400 8N1”) I used Minicom on Linux, but you could use any terminal emulator; nothing fancy.

  3. Turn the switch on, while watching the terminal. You should at least see some boot messages. It’s possible for the switch to be set to serial settings other than 38400 8N1, but it seems as though the firmware is always set that way. So if the switch is working and the serial connection is correct, you should see something.

  4. “Try before you pry”, as the fire service saying goes. Before screwing around with the factory reset, it’s worth giving the default login password a try. (It’s ‘admin’ as the user with no password.) In my case this didn’t work, but it’s always worth a shot.

  5. Power cycle the switch. As you turn it back on, press Ctrl-U on the terminal. Within the first second or so of boot, this should drop you into a firmware menu. Pressing ‘D’ for Delete will show you a list of files in the switch’s firmware.

  6. Delete the ‘startup-config’ file but nothing else. Power cycle the switch again. Don’t be alarmed if it takes a while to boot back up. (I had to power cycle it twice; the first time I don’t think I left it unplugged long enough. Give it 10s or so.)
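For reference, the terminal settings in step 2 translate into something like the following with minicom or GNU screen. The device path is an assumption; a USB-serial adapter usually shows up as /dev/ttyUSB0, while a built-in port is typically /dev/ttyS0:

```
# minicom: -b sets the baud rate, -D the serial device (8N1 is the default)
minicom -b 38400 -D /dev/ttyUSB0

# or with GNU screen:
screen /dev/ttyUSB0 38400
```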

What you should end up with is what you probably wanted in the first place: a switch in factory-fresh condition. From there, you can either continue configuring it via the serial connection, or use the web interface. Beware that the web interface seems to perform poorly on anything except for IE, though.

References and Anti-Insomnia Treatments:

  • How to reset the Switch??? - one of the only useful threads I found on Linksys’ support forums.

  • Real console on Linksys 2024P - I haven’t tried this procedure yet, but it’s allegedly a way of getting a ‘power user’ console on the switch via Telnet, once you’ve factory reset it.

  • Linksys SRW models password reset - One of the few articles that correctly identified the necessary straight-through cable, but it tells you to press Esc on boot to access the firmware reset menu; on mine, I had to press Ctrl-U as documented elsewhere. Perhaps other SRW models use Esc?

0 Comments, 0 Trackbacks

[/technology] permalink