Kadin2048's Weblog


Sat, 08 Jul 2017

After yet another mysterious Perl-related SNAFU, I decided that it’s time to say goodbye to the venerable “Blosxom” engine. As of this entry, I don’t plan to add any new entries to this blog, and I have kludged blosxom.cgi into working, however temporarily, just so that I can do one last final static HTML rendering of all the posts.

The result is that all the existing content should stay where it is, and all links should remain unbroken. I have a terrible hatred for people who break links or pull content down out of sheer laziness; as it costs very little to keep a bunch of static HTML files around, I see no reason not to do so — without the live CGI running, there are very few security issues, and it just seems like The Right Thing To Do, on the off chance anyone finds some of my old posts useful or interesting.

Going forward, I plan to spin up a new site using Jekyll, and may in fact import all the old entries from this blog into it (making them instantly a lot less ugly, since I don’t plan to roll my own HTML and CSS this time around). However, those will end up, at least in my current plan, in a different URL namespace (perhaps /blog/ instead of /weblog/ — the latter seems a bit dated by modern standards anyway), so that all the old links will continue merrily onward until either the SDF goes down for good, or the heat death of the universe.

Anyway, if there’s anyone else out there using Blosxom in 2017… good luck. My recommendation, given modern hardware and storage capabilities combined with the hostile security environment the Web has become, is to use it as a static site generator (similar to Jekyll) rather than a live CGI with mod_rewrite as most of us start off using it originally. If you wanted to give a little nod to modern conveniences, you could certainly trigger the static regeneration via a Git hook, just like Github does with Jekyll.
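A post-receive hook for that might look something like the following — just a sketch, assuming Blosxom’s static mode has already been enabled via $static_dir and $static_password in blosxom.cgi, and with the repository path and password as placeholders:

```shell
#!/bin/sh
# post-receive hook: re-render the whole blog statically after each push.
# BLOG_DIR and the password below are placeholders for your own setup.
BLOG_DIR=/home/user/blosxom

# Update the working copy (blosxom.cgi plus the entries tree) from the pushed ref.
GIT_WORK_TREE="$BLOG_DIR" git checkout -f
cd "$BLOG_DIR" || exit 1

# Invoking blosxom.cgi from the command line with -password triggers its
# built-in static rendering mode, writing HTML into $static_dir.
perl blosxom.cgi -password='hackme'
```

If I recall the configuration correctly, a stock blosxom.cgi ships with static rendering off, so nothing happens until both $static_dir and $static_password are set.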

When I first started this blog, the idea of a dynamic site without a database seemed pretty cool. Today, the idea of a site without a database is still cool, but I’ve become gradually less and less enthused about the idea of dynamic sites in general, largely because of the security/maintenance tradeoffs. If you don’t have time to maintain a secure dynamic site, you don’t have time to maintain a dynamic site. Static HTML, in contrast, is much easier as long as your webhost is reasonably competent.

But there’s not much compelling about Blosxom today, given that Jekyll and a variety of other static site generators exist, and have a much more robust ecosystem of themes, plugins, and documentation available.

I can’t say that Blosxom owes me anything at this point. It ran this blog for more than a decade, and that’s longer than most pieces of software ever get to exist in production.

0 Comments, 0 Trackbacks

[/meta] permalink

Sat, 25 Mar 2017

It has been widely reported this week that Google has formally announced plans to kill off Google Talk, its original popular IM product which (for most of us) was supplanted by “Hangouts” a few years back.

I still think that Google Talk was the high-water mark for Google’s “over the top” (OTT) chat efforts; it was reliable, standards-based, interoperable with third-party clients and servers, feature-rich, and well integrated with other Google products such as Gmail. You could federate Google Talk with a standard XMPP server and communicate back and forth across domains, or use a third-party desktop client like Adium or Pidgin. Google Talk message logs appeared in Gmail almost like email messages, making them easy to backup (you could retrieve them with IMAP!) or search.

Looking back on those halcyon days, we hardly knew how good we had it.

Everything Google has done since then has made the basic user experience gradually more shitty.

Today, Hangouts works with Adium and Pidgin sometimes, depending on what Google has done to break things lately. XMPP federation with other servers is being disabled, for no good reason that I can tell, in the near future, finally making it the walled garden that Google apparently wants. Integration with other products is inconsistent: to use some Hangouts features, you need to use the primary web interface (hangouts.google.com), but other key features — message search being the biggest one — are missing entirely, and require you to go into Gmail. Gmail! Why the fuck do I need to go into my email client to search my messaging logs? Who knows. That’s just how Google makes you do it. And of course in Gmail, Hangouts logs are no longer stored as emails, they’re some bizarre format where logs are broken up arbitrarily into little chunks (sometimes one log chunk per message), and in some cases there’s no way to get from a message in Gmail’s search results back to a coherent view of the conversation that it occurred in.

In the meantime they added voice, which is sorta neat but nobody I know really uses, and video / screensharing, which is very cool but uses its own web interface and seems suspiciously like a bolt-on addition.

Basically, Hangouts is broken.

But rather than fix it, Google seems determined to screw it up some more, in order to turn it into an “enterprise” messaging system (read: doomed Slack competitor). On the chopping block in the near term is the integration of carrier SMS and MMS into the Hangouts mobile app. I guess because enterprise users don’t use text messages..? Only Google knows why, and they’re not saying anything coherent.

For us poor plebs, they created “Allo”, a WhatsApp clone combining all the downsides of OTT messaging and carrier SMS into one shit sandwich of a product. (Just the downsides of carrier SMS, like requiring a POTS phone number; it doesn’t actually do carrier SMS, of course. That’s a new, separate app.) The sole deal-sweetener was the inclusion of Google Assistant, which could have just as easily been added into Hangouts. But instead they made it an Allo exclusive, ensuring that nobody really got to use it. Bravo.

Here’s the worst part: Hangouts is broken, Google is not going to fix it, and the best alternative for Joe User right now is … drumroll, please … Facebook Messenger.

That’s right, Facebook Messenger. Official platform of your 14-year-old nephew, at least as of five years ago, before he moved on to Snapchat or something else cooler. That’s the competition that Google is basically surrendering to. It’s like losing a footrace to someone too stupid to walk and chew gum at the same time, but only because you decided it’d be fun to saw your own legs off.

In fairness — very, very grudging fairness, because Facebook at this point is about one forked tail away from being Actually The Devil Himself in terms of user-hostility — Facebook Messenger isn’t… all that bad. I can’t believe I just wrote that. I feel dirty.

However, it’s hard to avoid: Facebook’s Messenger is just the better product, or is likely to become the better one soon. Let us count the ways:

  • It has the userbase, because everyone with a Facebook account also has a Messenger account. However, it doesn’t require FB membership to use Messenger: you can create a Messenger-only account by validating a phone number (much like WhatsApp or Signal or Allo). So it’s got all of them beat there, and network effects mean that the number of people already using the service is always the most important feature of a messaging service.

  • It allows end-to-end encryption but isn’t wed to it (as Signal is), meaning it can do things that are hard to do in a 100% E2E encrypted architecture, like letting you simultaneously use multiple devices in the course of a day and have all your messages arrive to all of them. All your logs can be searched from any device, too.

  • Speaking of logs, Facebook already has better facilities for searching your past conversations than Hangouts. (The only service that seems to be better is Slack, which is somewhat ironic given that Google apparently wants Hangouts to be its Slack competitor, and Google can’t beat Slack at the one thing that you’d expect Google to actually do well.) Finding a conversation based on a keyword and then being able to read it in context is already far easier from Messenger’s website than from Gmail’s, and of course you can’t search conversations from Hangouts’ main website at all.

  • On mobile, at least on Android, the Hangouts app is better for the moment, but I don’t expect that to stay the same once Google starts forklift-upgrading it to be “enterprisey-er”. And the Messenger app isn’t terrible (unlike the main Facebook app, which is an unstable battery- and data-hogging testament to bad ideas poorly implemented). The recent inclusion of Snapchat-like features nobody really asked for notwithstanding, Messenger does its job and has some occasional nice features, like very low-friction picture and short video messaging. At least on my device, it hasn’t crashed or ANRed in as long as I can remember.

Personally, I’ll probably continue to use Hangouts until the bitter end, because I’m lazy and resistant to change, but I suspect Messenger is where most of my friends are going to end up, and those who don’t want to use a FB product will largely just end up getting carrier SMS/MMS messages again.

Congrats, Google. You could have owned messaging, but you screwed it up. You could probably still salvage the situation, but nothing I’ve seen from the company indicates that they care to, or are even capable of admitting the extent of their miscues.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

Fri, 24 Mar 2017

Well, another month has gone by, and I still haven’t migrated this pile of bits to a better blog engine. And yet again, Perl + Blosxom + the plugins I’m using shit the bed.

The issue this time, for future reference, was:

File is not a perl storable at
line 383, <FILE> line 628, at
/blosxom-plugins/calendar line 322.

The solution, for the moment, was to simply delete the Calendar plugin’s cache (that’s what’s being referenced on line 322) and allow it to regenerate.

Why this keeps happening, I’m not sure. I hate to invest the effort to figure it out, when I may well be the only person left on the Internet using this particular combination of software, and the proper solution is clearly a migration to another platform.

But if you happen (by some chance) to run into this yourself, and this blog hasn’t crashed again (or you’re reading somebody’s helpful cache of this page!), be aware that deleting the cache is seemingly only a temporary fix. A hacky fix would be to set up a cron job to periodically clear the Calendar plugin cache, which would at least put an upper bound on the amount of downtime the problem can cause, but that’s … pretty ugly, even for me, and I’m largely immune to opposition to solutions on aesthetic grounds these days.
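In crontab form, that hack would be something like the entry below — the cache path is hypothetical, and would need to point at wherever the Calendar plugin’s cache file actually lives on your system:

```
# Clear the Blosxom Calendar plugin's cache every night at 04:00,
# bounding corrupted-cache downtime at roughly one day.
# The path is a placeholder for the plugin's real cache file.
0 4 * * * rm -f /home/user/blosxom/state/.calendar_cache
```

The plugin rebuilds its cache on the next request, so the worst case is one slower page load per day.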

0 Comments, 0 Trackbacks

[/meta] permalink

Sun, 12 Mar 2017

Related to yesterday’s post about the AP article confirming that, in fact, modern cryptography is pretty good, there’s a reasonably decent discussion going on at Hacker News in response, with a mixture of the usual fearmongering / unjustified pessimism but also some very good information.

This post, by HN user “colordrops”, is particularly worth discussing, despite falling a bit on the “pessimistic” side of things:

It seems that most people are completely in the dark when it comes to security, including myself, but there are some principles that should be unwavering that regularly get ignored again with every new iteration of “secure” software:

  • If there is a weak layer in the stack, from the physical layer to the UI, then the system is not secure. Even if your messaging app is secure, your messages are not secure if your OS is not secure
  • If the source code is not available for review, the software is not secure
  • If you or someone you trust has not done a full and thorough review of all components of the stack you are using, the software is not secure
  • Even if the source code is available, the runtime activity must be audited, as it could download binaries or take unsavory actions or connections.
  • On the same note, if you do not have a mechanism for verifying the authenticity of the entire stack, the software is not secure.
  • If any part of the stack has ever been compromised, including leaving your device unlocked for five minutes in a public place, the software is not secure.

I could go on, and I’m FAR from a security expert. People compromise way too much on security, and make all kinds of wrong assumptions when some new organization comes out and claims that their software is the “secure” option. We see this with apps like Telegram and Signal, where everyone thinks they are secure, but if you really dig down, most people believe they are secure for the wrong reasons:

  • The dev team seems like honest and capable people
  • Someone I trust or some famous person said this software is secure
  • They have a home page full of buzzwords and crypto jargon
  • They threw some code up on github
  • I heard they are secure in half a dozen tweets and media channels

First, I have to take serious issue with the author’s use of “secure” as a sort of absolute. Thinking of “secure” as a sort of binary, is-it-or-isn’t-it state is only useful in the most academic corners of cryptography, where we can talk about an algorithm being “secure” against certain kinds of analysis or attack. It is bordering on useless when you get into the dirtiness of the real world.

Implementations are not “secure” in the absolute. Implementations may be secure within a certain threat space, or for a certain set of needs, but security is always relative to some perceived adversary. If your adversary has unlimited resources, then no implementation will ever be secure over a long timescale. (An ‘unlimited resources’ adversary will just build Dyson spheres around a few nearby stars and use them to power computronium bruteforce machines. Good thing you don’t really have an unlimited-resources adversary, do you?)

Security is all about tradeoffs. As you make an implementation more robust, it becomes more cumbersome to use. Computers have done really amazing things to make formerly-cumbersome security easier to use, but this tradeoff still exists and probably will always exist once you start talking about practical attacks.

The implementation standards for government-level security, e.g. the handling of classified information by the US DOD and similar, require electronically shielded rooms and specially vetted equipment to prevent information leakage at the endpoints. But as the last few years have demonstrated, these systems — while extremely impressive and well-constructed — have still leaked information through human factors compromises. So in that sense, anything that involves a person is arguably “insecure”. For most applications, there’s no getting around that.

Beyond that, though, the author does make some good points about users believing that a program is “secure” for the wrong reasons, including buzzword-laden webpages, unverified claims in the media, or endorsement by famous people who do not have a significant reputation in the IT security community at stake. These are all real problems that have been exploited to push poorly-designed software onto users who deserve better.

Many modern apps, including not only Telegram and Signal but also Facebook Messenger in its end-to-end encrypted mode, and various corporate email systems, are “secure enough” for particular needs. They’ll almost certainly hide what you’re doing or saying from your family, friends, nosy neighbors, boss (provided you don’t work for an intelligence or law enforcement agency), spouse, etc., which is what I suspect all but a very small fraction of users actually require. So, for most people, they are functionally secure.

For the very small number of users whose activities are likely to cause them to be of interest to modern, well-funded, First World intelligence agencies, essentially no application running on a modern smartphone is going to be secure enough.

As others on HN point out, modern smartphones are essentially “black boxes” running vast amounts of closed-source, unauditable code, including in critical subsystems like the “baseband”. One anonymous user even alleges that:

The modifications installed by your phone company, etc. are not open source. The baseband chip’s firmware is not open sourced. I’ve even heard of DMA being allowed over baseband as part of the Lawful Intercept Protocol.

There is, naturally, no sourcing on the specific claim about DMA over the cellular connection, but that would be a pretty neat trick: it would essentially be one step above remote code execution, and give a remote attacker access to the memory space of any application running on the device, perhaps without any sign (such as a typical rootkit or spyware suite would leave) that the device was tapped. Intriguing.

I am, personally, not really against intelligence agencies having these sorts of capabilities. The problem arises when they are too easy or cheap to use. The CIA’s stash of rootkits and zero-days is unlikely to be deployed except in bona fide (at least, perceived to be bona fide) national security situations, because of the expense involved in obtaining those sorts of vulnerabilities and the sharp drop in utility once they’ve been used once. They’re single-shot weapons, basically. If some officials were to get their way and manage to equip every consumer communications device with a mandatory backdoor, though, it would be only a matter of time before the usage criteria for that backdoor broadened from national security / terrorism scenarios, to serious domestic crimes like kidnapping, and then on down the line until it was being used for run-of-the-mill drug possession cases. And even if you think (and I will strongly disagree, but it’s out of scope for this post) that drug possession cases deserve the availability of those sorts of tools, in the process of that trickling-down of capabilities, it would also doubtless fall into the hands of unintended third parties: from the cop who wants to see if their wife or husband is cheating on them, to organized crime, to Internet trolls and drive-by perverts looking for nude photos. Such is the lifecycle of security vulnerabilities: it all ends up in the hands of the “script kiddies” eventually.

Nobody has found a way to break that lifecycle so far: today’s zero-days are tomorrow’s slightly-gifted-highschool’s tools for spying on the girl or boy they’re fancying in class. Intentionally creating large vulnerabilities — which is exactly what a backdoor would be — just means everyone along the food chain would get a bigger meal as it became more and more widely available.

The only solution, as I see it, is to keep doing pretty much what we’ve been doing: keep funding security research to harden devices and platforms, and keep funding the researchers on the other side of the equation (both in the private sector and in the IC) who try to pick away at it, and hope that the balance remains relatively constant, and similar to what we currently enjoy: enough security for the average person to keep their communications private from those they don’t want to share them with, impressively secure communications for those who want to put in the effort, but enough capabilities on the law-enforcement and intelligence side to keep organized crime and terrorism disrupted in terms of communications.

0 Comments, 0 Trackbacks

[/technology] permalink

Sat, 11 Mar 2017

Way back in 2013, I wrote about the NSA leaks and how I didn’t think that they signified any fundamental change in the balance of power between cryptographers and cryptanalysts that has been going on for centuries. It would seem that the New York Times has finally worked through their backlog and more or less agrees.

(The article in question comes from the AP, so if the NYT website doesn’t want to load or gets paywalled or taken out by a Trump Republic drone strike at some point in the future, you can always just Google the title and turn it up. Probably.)

The tl;dr version:

Documents purportedly outlining a massive CIA surveillance program suggest that CIA agents must go to great lengths to circumvent encryption they can’t break. In many cases, physical presence is required to carry off these targeted attacks. […] It’s much like the old days when “they would have broken into a house to plant a microphone,” said Steven Bellovin, a Columbia University professor who has long studied cybersecurity issues.

In other words, it’s pretty much what we expect the CIA to be doing, and what they’re presumably pretty good at, or at least ought to be pretty good at given the amount of time they’ve had to get good at it.

Which means that I was pretty much on target back in 2013, and the sky-is-falling brigade was not:

My guess […] is that there’s nothing fundamentally wrong with public key crypto, or even in many carefully-built implementations. It’s when you start optimizing for cost or speed that you open the door.

So: if you are very, very careful, you will still be able to build up a reasonably-secure infrastructure using currently available hardware and software. (‘Reasonably secure’ meaning resistant to untargeted mass surveillance, not necessarily to a targeted attack that might include physical bugging: that’s a much higher bar.) However, some code may need to be changed in order to eliminate any reliance on possibly-compromised components, such as hardware RNGs / accelerators that by their nature are difficult to audit.

In the past few years, most widely-used crypto libraries have moved away from hardware PRNGs that are thought to be suspect, and generally taken a less seat-of-the-pants approach to optimizing for speed than was previously (sometimes) the case. For security, this is largely a good thing.

In terms of intelligence-gathering capability, it’s presumably a degradation vs. the mass-intercept capability that certain agencies might have had when more traffic was unencrypted or poorly-encrypted, but it was foolish to believe that situation was going to go on forever. End-to-end crypto has been a goal of the pro-security tech community (formerly and now cringingly referred to as “cypherpunks”, back when that seemed like a cool name) for almost two decades, and would have happened eventually.

The IC still has significant tools at its disposal, including traffic analysis and metadata analysis, targeted bruteforcing of particular messages, encrypted content, or SSL/TLS sessions, endpoint compromises, human factors compromise, and potentially future developments in the quantum cryptography/cryptanalysis space. Without defending or minimizing Snowden et al, I do not think that it means the end of intelligence in any meaningful sense; those predictions, too, were overstated.

Anyway, it’s always nice to get some validation once in a while that the worst predictions don’t always turn out to be the correct ones. (Doesn’t quite make up for my hilariously blown call on the election, but at least I wasn’t alone in that one.)

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 10 Feb 2017

I realized today that this blog had crashed, again. It’s becoming obvious that I’m going to need to find some other hosting / server-side engine solution, because the downtime and associated maintenance (mostly due to Perl’s constant non-backwards-compatible changes, from what I can tell) are really not working for me anymore. I will try, as a Good Internet Citizen, to keep all the posts up and keep the URLs stable, but who knows what will happen. I may well just render the existing blog corpus to static HTML pages and call it quits. It has been a fairly good run at this point.

I’m already getting penalized by Google for not being “mobile friendly” enough, which I think I’ve decided I don’t really care about (in fairness, the blog pages look mostly OK on my Android device), but it’s yet another task you have to do now if you want to put content online. The bar has gotten a lot higher since I started this thing, or so it seems.

Anyway: the culprit today was a corrupted cache file used by the Blosxom Calendar plugin. Chasing this down was made more complicated than it needed to be, because by default the Calendar plugin stores its cache in a dotfile, which most of my other plugins don’t. No clue why; that seems like slightly unfriendly behavior.

I tweaked the configuration a bit so that the file is now regularly visible. Anyone else using Blosxom+Calendar may or may not be interested in the change.

0 Comments, 0 Trackbacks

[/meta] permalink

Thu, 20 Oct 2016

For all the stupidity of the current Presidential election, one interesting discussion that it has prompted is a resurrection of the old debate over nuclear strategy, and particularly the strategy of “launch under attack”, better known as “Launch On Warning”. Jeffrey Lewis has an article, “Our Nuclear Procedures Are Crazier Than Trump”, in Foreign Policy which ties this into current events, prompted by recent statements by both candidates.

Much of the discussion in the last 24 hours has centered on whether Hillary Clinton inadvertently disclosed classified information when she mentioned, during the third debate, that the President would have only “four minutes” to decide on whether to respond in the event of a large-scale attack on the continental U.S. by an adversary. This is not, at least to me, a particularly interesting discussion; nothing Clinton said goes beyond what is in the open literature on the topic and has been available for decades.

What is interesting is that, in 2016, we’re talking about Launch On Warning at all. Clinton’s “four minutes” should be a thing of the past.

I mean: the other President Clinton supposedly moved the U.S. away from LOW in a 1997 Presidential Directive, instead putting U.S. forces on a stance of second-strike retaliation only after actually being on the receiving end of a successful attack. This is a reasonable posture, given that the U.S. SSBN force alone has enough destructive power to serve, independently of the rest of the ‘nuclear triad’, as a reasonable deterrent against a first strike by another global power.

What’s interesting is that, at the time, the Clinton administration downplayed the move and said that it was merely a continuation of existing policy dating from the Reagan years and expressed in previous PDDs. A Clinton spokesperson reportedly said at the time: “in this PDD we direct our military forces to continue to posture themselves in such a way as to not rely on launch on warning—to be able to absorb a nuclear strike and still have enough force surviving to constitute credible deterrence.” (Emphasis mine.)

The actual Presidential Directives are, as one might expect, still classified, so we don’t have a lot other than hearsay and the statements of various spokespeople to go off of. But it would appear safe to say that the U.S. has not depended on LOW since at least 1997, and probably since some point in the 80s. I think it’s likely that the original change was prompted by a combination of near-miss events in the 1970s (e.g. Zbigniew Brzezinski’s infamous 3 A.M. wakeup call on November 9, 1979), plus the maturation of the modern SSBN force into a viable second-strike weapon, which together caused U.S. leaders to question the wisdom of keeping the nuclear deterrent on a hair trigger. As well they probably should have, given the risks.

In fact, being able to lower the proverbial hammer and relax the national trigger finger somewhat is probably the biggest benefit of having an SSBN force. It’s why other nuclear powers, notably the U.K., have basically abandoned ground-based nuclear launch systems in favor of relying exclusively on submarines for deterrence. The U.K., famously, issues “Letters of Last Resort” to their submarine captains, potentially giving them launch authority even in the absence of any external command and control structure — ensuring a retaliatory capability even in the event of complete annihilation of the U.K. itself. While this places a lot of responsibility on the shoulders of a handful of submarine captains, it also relieves the entire U.K. defense establishment from having to plan for and absorb a decapitation attack, and it certainly seems like a better overall plan than automated systems that might be designed to do the same thing.

In the U.S. we’ve never gone as far as the U.K. in terms of delegation of nuclear-launch authority (perhaps because the size of the U.S. nuclear deterrent would mean an unacceptable number of trusted individuals would be required), but it’s been a while since any President has necessarily needed to decide whether to end the world or face unilateral annihilation in a handful of minutes. They would need to potentially decide whether to authorize a U.S. ICBM launch in that very short window of time, but they wouldn’t lose all retaliatory capacity if they chose not to, and it is difficult to imagine — given the possibility and actual past experience with false alarms — that a sane president would authorize a launch before confirmation of an actual attack on U.S. soil.

So why did the “four minute” number resurface at all? That’s a bit of a mystery. It could have just been a debate gambit by Clinton, which is admittedly the simplest explanation, or perhaps the idea of Launch On Warning isn’t completely gone from U.S. strategic policy. This is not implausible, since we still maintain a land-based ICBM force, and the ICBMs are still subject to the first-strike advantage which produced Launch On Warning in the first place.

And rather than debating the debate, which will be a moot point in a very few weeks, the real question we ought to be asking is why we bother to maintain the land-based strategic nuclear ICBM force at all.

Here’s a modest proposal: retire the ICBM force’s strategic nuclear warheads, but retain the missile airframes and other launch infrastructure. Let other interested parties observe the nuclear decommissioning, if they want to, so that there’s no mistaking a future launch of those missiles as a nuclear one. And then use the missiles for non-nuclear Prompt Global Strike or a similar mission (e.g. non-nuclear FOBS, “rod from God” kinetic weapons, or whatever our hearts desire).

It ought to make everyone happy: it’s that many fewer fielded nuclear weapons in the world, it eliminates the most vulnerable part of the nuclear triad and moves us firmly away from LOW, it doesn’t take away any service branch’s sole nuclear capability (the Air Force would retain air-launched strategic capability, as a hedge against future developments making the SSBN force obsolete), and it would trade an expensive and not-especially-useful strategic capability for a much-more-useful tactical capability, and in the long term it could potentially allow the U.S. to draw down overseas-deployed personnel and vulnerable carrier strike groups while retaining rapid global reach.

It makes too much sense to ever actually occur, of course, at least not during an election season.

0 Comments, 0 Trackbacks

[/politics] permalink

Thu, 13 Oct 2016

At some point, Yahoo started sticking a really annoying popup on basically every single Flickr page, if you aren’t logged in with a Yahoo ID. Blocking these popups is reasonably straightforward with uBlock or ABP, but it took me slightly longer than it should have to figure it out.

As usual, here’s the tl;dr version. Add this to your uBlock “My filters”:

! Block annoying Flickr login popups

That’s it. Note that this doesn’t really “block” anything, it’s a CSS hiding rule. For this to work you have to ensure that ‘Cosmetic Filters’ in uBlock / uBlock Origin is enabled.

The slightly-longer story as to why this took more than 10 seconds of my time, is because the default uBlock rule that’s created when you right-click on one of the popups and select ‘Block Element’ doesn’t work well. That’s because Yahoo is embedding a bunch of random characters in the CSS for each one, which changes on each page load. (It’s not clear to me whether this is designed expressly to defeat adblockers / popup blockers or not, but it certainly looks a bit like a blackhat tactic.)

Using the uBlock Origin GUI, you have to Ctrl-click (Cmd-click on a Mac) the topmost element-hiding rule to get a ‘genericized’ version of it, one that drops the full CSS path and so works across page reloads. I’d never dug into any of the advanced features of uBlock Origin before — it’s always just basically worked out of the box, insofar as I needed it to — so this feature was a nice discovery.
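To make the difference concrete, here is an invented before-and-after (neither selector is Flickr’s real markup; the real class names contained Yahoo’s randomized strings):

```
! What 'Block Element' generates: a full CSS path ending in a
! randomized class, which stops matching on the next page load
flickr.com##body > div#main > div.fluid-centered > div.popup-xK3f9q
! The Ctrl-click 'genericized' form: the path and the unstable
! parts are dropped, so the rule keeps matching across reloads
flickr.com##div.popup
```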

Why, exactly, Yahoo is shoving this annoying popup in front of content on virtually every Flickr page, for every non-logged-in viewer, isn’t clear, although we can certainly speculate: Yahoo is probably desperate at this point to get users to log in. Part of their value as a company hinges on the number of active users they can claim, so each person they hard-sell into logging in is worth that much more whenever somebody finally steps in and buys them.

As a longtime Flickr user, that end can’t come soon enough. It was always disappointing that Flickr sold out to Yahoo at all; somewhere out there, I believe there’s a slightly-less-shitty parallel universe where Google bought Flickr, and Yahoo bought YouTube, and Flickr’s bright and beautiful site culture was saved just as YouTube’s morass of vitriol and intolerance became Yahoo’s problem to moderate. Sadly, we do not live in that universe. (And, let’s be honest, Google would probably have killed off Flickr years ago, along with everything else in their Graveyard of Good Ideas. See also: Google Reader.)

Perhaps once Yahoo is finally sold and broken up for spare parts, someone will realize that Flickr still has some value and put some effort into it, aside from strip-mining it for logins as Yahoo appears to be doing. A man can dream, anyway.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Wed, 14 Sep 2016

Everyone’s favorite security analyst Bruce Schneier seems to think that somebody is learning how to “take down the Internet” by repeatedly probing key pieces of “basic infrastructure” — exactly what’s being probed isn’t stated, but the smart money is on the DNS root servers. Naturally, who is doing this is left unsaid as well, although Schneier does at least hazard the obvious guess at China and Russia.

If this is true, it’s a seemingly sharp escalation towards something that might legitimately be called ‘cyberwarfare’, as opposed to simply spying-using-computers, which is most of what gets lumped in under that label today. Though it’s not clear exactly why a state-level actor would want to crash DNS; it’s arguably not really “taking down the Internet”, although it would mess up a lot of stuff for a while. Even if you took down the root DNS servers, it wouldn’t stop IP packets from being routed (the IP network itself is pretty resilient), and operators could pretty quickly unplug their caching DNS resolvers and let them run independently, restoring service to their users. You could create a mess for a while, but it wouldn’t be crippling in the long term.
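The caching point is easy to sketch. Here’s a toy model of a caching resolver (names, TTLs, and addresses are all illustrative; 192.0.2.1 is an RFC 5737 documentation address, and this is nothing like a real DNS implementation) showing why cached answers keep a resolver useful even after its upstream, root servers included, goes dark:

```python
import time

# Toy caching resolver: once an answer is cached, queries are served
# locally until the TTL expires; upstream is only asked on a cache miss.
class CachingResolver:
    def __init__(self, upstream, ttl=3600):
        self.upstream = upstream      # function: name -> IP, may raise
        self.ttl = ttl
        self.cache = {}               # name -> (ip, expiry timestamp)

    def resolve(self, name):
        entry = self.cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0]           # served from cache, upstream not needed
        ip = self.upstream(name)      # cache miss: ask upstream
        self.cache[name] = (ip, time.time() + self.ttl)
        return ip

def upstream_ok(name):
    return "192.0.2.1"

def upstream_down(name):
    raise RuntimeError("root servers unreachable")

r = CachingResolver(upstream_ok)
r.resolve("example.com")              # primes the cache

# Simulate the outage: upstream gone, but cached answers still work.
r.upstream = upstream_down
print(r.resolve("example.com"))       # prints 192.0.2.1
```

Only names nobody has looked up recently (an uncached query would raise here) actually break, which is why a root outage degrades the network slowly rather than all at once.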

Except perhaps as one component of a full-spectrum, physical-world attack, it doesn’t make a ton of sense to disrupt a country’s DNS resolvers for a few hours. And Russia and China don’t seem likely to actually attack the U.S. anytime soon; relations with both countries seem to be getting worse over time, but they’re not shooting-war bad yet. So why do it?

The only reason that comes to mind is that it’s less ‘preparation’ than ‘demonstration’. It’s muscle flexing on somebody’s part, and not particularly subtle flexing at that. The intended recipient of the message being sent may not even be the U.S., but some third party: “see what we can do to the U.S., and imagine what we can do to you”.

Or perhaps the eventual goal is to cover for a physical-world attack, but not against the U.S. (where it would probably result in the near-instant nuclear annihilation of everyone concerned). Perhaps the idea is to use a network attack on the U.S. as a distraction, while something else happens in the real world? Grabbing eastern Ukraine, or Taiwan, just as ideas.

Though an attack on the DNS root servers would be inconvenient in the short run, I am not sure that in the long run it would be the worst thing to happen to the network as an organism: DNS is a known weakness of the global Internet already, one that desperately needs a fix but where there’s not enough motivation to get everyone moving together. An attack would doubtless provide that motivation, and be a one-shot weapon in the process.

Update: This article from back in April, published by the ‘Internet Governance Project’, mentions a Chinese-backed effort to weaken US control over the root DNS, either by creating additional root servers or by potentially moving to a split root. So either the probing or a future actual disruption of DNS could be designed to further this agenda.

In 2014, [Paul] Vixie worked closely with the state-owned registry of China (CNNIC) to promote a new IETF standard that would allow the number of authoritative root servers to increase beyond the current limit of 13. As a matter of technical scalability, that may be a good idea. The problem is its linkage to a country that has long shown a more than passing interest in a sovereign Internet, and in modifying the DNS to help bring about sovereign control of the Internet. For many years, China has wanted its “own” root server. The proposal was not adopted by IETF, and its failure there seems to have prompted the formation and continued work of the YETI-DNS project.

The YETI-DNS project appears, at the moment, to be defunct. Still, China would seem to have the most to gain by making the current U.S.-based root DNS system seem fragile, given the stated goal of obtaining their own root servers.

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 11 Sep 2016

If you can only bear to read one 9/11 retrospective or tribute piece this year, I’d humbly suggest — if you are not already familiar — reading the story of Rick Rescorla, one of the many heroes of the WTC evacuation.

The Real Heroes Are Dead, written by James B. Stewart for The New Yorker in February 2002, is worth the read.

0 Comments, 0 Trackbacks

[/other] permalink