Kadin2048's Weblog

Sat, 08 Jul 2017

After yet another mysterious Perl-related SNAFU, I decided that it’s time to say goodbye to the venerable “Blosxom” engine. As of this entry, I don’t plan to add any new entries to this blog, and I have kludged blosxom.cgi into working, however temporarily, just so that I can do one final static HTML rendering of all the posts.

The result is that all the existing content should stay where it is, and all links should remain unbroken. I have a terrible hatred for people who break links or pull content down out of sheer laziness; as it costs very little to keep a bunch of static HTML files around, I see no reason not to do so. Without the live CGI running, there are very few security issues, and it just seems like The Right Thing To Do, on the off chance anyone finds some of my old posts useful or interesting.

Going forward, I plan to spin up a new site using Jekyll, and may in fact import all the old entries from this blog into it (making them instantly a lot less ugly, since I don’t plan to roll my own HTML and CSS this time around). However, those will end up, at least in my current plan, in a different URL namespace (perhaps /blog/ instead of /weblog/ — the latter seems a bit dated by modern standards anyway), so that all the old links will continue merrily onward until either the SDF goes down for good, or the heat death of the universe.

Anyway, if there’s anyone else out there using Blosxom in 2017… good luck. My recommendation, given modern hardware and storage capabilities combined with the hostile security environment the Web has become, is to use it as a static site generator (similar to Jekyll) rather than as a live CGI with mod_rewrite, the way most of us originally started using it. If you wanted to give a little nod to modern conveniences, you could certainly trigger the static regeneration via a Git hook, just like GitHub does with Jekyll.
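For the curious, the hook approach might look something like the sketch below. Everything here is a placeholder adapted to a hypothetical setup — the paths, the password, and even the exact flags should be checked against your own blosxom.cgi, though `-password=` and `-all=1` are, as I recall, Blosxom’s documented static-rendering switches:

```shell
#!/bin/sh
# Sketch of a Git post-receive hook that re-renders the blog to static
# HTML after every push. Paths and password are placeholders for your
# own setup; -password= must match $static_password in blosxom.cgi,
# and -all=1 forces a full re-render rather than only changed entries.

BLOG_CGI="$HOME/www/cgi-bin/blosxom.cgi"   # hypothetical location
STATIC_PASSWORD="changeme"                 # placeholder

if [ -f "$BLOG_CGI" ]; then
    perl "$BLOG_CGI" -password="$STATIC_PASSWORD" -all=1 -quiet=1
else
    # Guard so the hook degrades gracefully if the CGI moves
    echo "blosxom.cgi not found at $BLOG_CGI; nothing to render"
fi
```

Drop that into `hooks/post-receive` in the bare repository you push to, make it executable, and every push re-renders the site.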

When I first started this blog, the idea of a dynamic site without a database seemed pretty cool. Today, the idea of a site without a database is still cool, but I’ve become gradually less and less enthused about the idea of dynamic sites in general, largely because of the security/maintenance tradeoffs. If you don’t have time to maintain a secure dynamic site, you don’t have time to maintain a dynamic site. Static HTML, in contrast, is much easier as long as your webhost is reasonably competent.

But there’s not much compelling about Blosxom today, given that Jekyll and a variety of other static site generators exist, and have a much more robust ecosystem of themes, plugins, and documentation available.

I can’t say that Blosxom owes me anything at this point. It ran this blog for more than a decade, and that’s longer than most pieces of software ever get to exist in production.

0 Comments, 0 Trackbacks

[/meta] permalink

Sat, 25 Mar 2017

It has been widely reported this week that Google has formally announced plans to kill off Google Talk, its original popular IM product which (for most of us) was supplanted by “Hangouts” a few years back.

I still think that Google Talk was the high-water mark for Google’s “over the top” (OTT) chat efforts; it was reliable, standards-based, interoperable with third-party clients and servers, feature-rich, and well integrated with other Google products such as Gmail. You could federate Google Talk with a standard XMPP server and communicate back and forth across domains, or use a third-party desktop client like Adium or Pidgin. Google Talk message logs appeared in Gmail almost like email messages, making them easy to backup (you could retrieve them with IMAP!) or search.

Looking back on those halcyon days, we hardly knew how good we had it.

Everything Google has done since then has made the basic user experience gradually more shitty.

Today, Hangouts works with Adium and Pidgin sometimes, depending on what Google has done to break things lately. XMPP federation with other servers is being disabled in the near future, for no good reason that I can tell, finally making Hangouts the walled garden that Google apparently wants. Integration with other products is inconsistent: to use some Hangouts features, you need to use the primary web interface (hangouts.google.com), but other key features — message search being the biggest one — are missing entirely, and require you to go into Gmail. Gmail! Why the fuck do I need to go into my email client to search my messaging logs? Who knows. That’s just how Google makes you do it. And of course in Gmail, Hangouts logs are no longer stored as emails; they’re some bizarre format where logs are broken up arbitrarily into little chunks (sometimes one chunk per message), and in some cases there’s no way to get from a message in Gmail’s search results back to a coherent view of the conversation it occurred in.

In the meantime they added voice, which is sorta neat but nobody I know really uses, and video / screensharing, which is very cool but uses its own web interface and seems suspiciously like a bolt-on addition.

Basically, Hangouts is broken.

But rather than fix it, Google seems determined to screw it up some more, in order to turn it into an “enterprise” messaging system (read: doomed Slack competitor). On the chopping block in the near term is the integration of carrier SMS and MMS into the Hangouts mobile app. I guess because enterprise users don’t use text messages…? Only Google knows why, and they’re not saying anything coherent.

For us poor plebs, they created “Allo”, a WhatsApp clone combining all the downsides of OTT messaging and carrier SMS into one shit sandwich of a product. (Just the downsides of carrier SMS, like requiring a POTS phone number; it doesn’t actually do carrier SMS, of course. That’s a new, separate app.) The sole deal-sweetener was the inclusion of Google Assistant, which could have just as easily been added into Hangouts. But instead they made it an Allo exclusive, ensuring that nobody really got to use it. Bravo.

Here’s the worst part: Hangouts is broken, Google is not going to fix it, and the best alternative for Joe User right now is … drumroll, please … Facebook Messenger.

That’s right, Facebook Messenger. Official platform of your 14-year-old nephew, at least as of five years ago, before he moved on to Snapchat or something else cooler. That’s the competition that Google is basically surrendering to. It’s like losing a footrace to someone too stupid to walk and chew gum at the same time, but only because you decided it’d be fun to saw your own legs off.

In fairness — very, very grudging fairness, because Facebook at this point is about one forked tail away from being Actually The Devil Himself in terms of user-hostility — Facebook Messenger isn’t… all that bad. I can’t believe I just wrote that. I feel dirty.

However, it’s hard to avoid: Facebook’s Messenger is just the better product, or is likely to become the better one soon. Let us count the ways:

  • It has the userbase, because everyone with a Facebook account also has a Messenger account. However, it doesn’t require FB membership to use Messenger: you can create a Messenger-only account by validating a phone number (much like WhatsApp or Signal or Allo). So it’s got all of them beat there, and network effects mean that the number of people already using the service is always the most important feature of a messaging service.

  • It allows end-to-end encryption but isn’t wed to it (as Signal is), meaning it can do things that are hard to do in a 100% E2E-encrypted architecture, like letting you simultaneously use multiple devices in the course of a day and have all your messages arrive on all of them. All your logs can be searched from any device, too.

  • Speaking of logs, Facebook already has better facilities for searching your past conversations than Hangouts. (The only service that seems to be better is Slack, which is somewhat ironic given that Google apparently wants Hangouts to be its Slack competitor, and Google can’t beat Slack at the one thing that you’d expect Google to actually do well.) Finding a conversation based on a keyword and then being able to read it in context is already far easier from Messenger’s website than from Gmail’s, and of course you can’t search conversations from Hangouts’ main website at all.

  • On mobile, at least on Android, the Hangouts app is better for the moment, but I don’t expect that to stay the same once Google starts forklift-upgrading it to be “enterprisey-er”. And the Messenger app isn’t terrible (unlike the main Facebook app, which is an unstable battery- and data-hogging testament to bad ideas poorly implemented). The recent inclusion of Snapchat-like features nobody really asked for notwithstanding, Messenger does its job and has some occasional nice features, like very low-friction picture and short video messaging. At least on my device, it hasn’t crashed or ANRed in as long as I can remember.

Personally, I’ll probably continue to use Hangouts until the bitter end, because I’m lazy and resistant to change, but I suspect Messenger is where most of my friends are going to end up, and those who don’t want to use a FB product will largely just end up getting carrier SMS/MMS messages again.

Congrats, Google. You could have owned messaging, but you screwed it up. You could probably still salvage the situation, but nothing I’ve seen from the company indicates that they care to, or are even capable of admitting the extent of their miscues.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

Fri, 24 Mar 2017

Well, another month has gone by, and I still haven’t migrated this pile of bits to a better blog engine. And yet again, Perl + Blosxom + the plugins I’m using shit the bed.

The issue this time, for future reference, was:

File is not a perl storable at
/usr/pkg/lib/perl5/vendor_perl/5.24.0/x86_64-netbsd-thread-multi/Storable.pm
line 383, <FILE> line 628, at
/blosxom-plugins/calendar line 322.

The solution, for the moment, was to simply delete the Calendar plugin’s cache (that’s what’s being referenced on line 322) and allow it to regenerate.
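In shell terms, the fix amounted to something like this. The cache path below is hypothetical — the Calendar plugin keeps its cache as a dotfile, and the exact location depends on your plugin state directory configuration, so check yours:

```shell
# Blow away the Calendar plugin's (possibly corrupt) cache file; the
# plugin will regenerate it on the next request. The path here is a
# placeholder -- Blosxom plugin state locations vary by installation.
CACHE="${CALENDAR_STATE_DIR:-$HOME/blosxom/state}/.calendar_cache"
rm -f -- "$CACHE"    # -f: no error if the file is already gone
```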

Why this keeps happening, I’m not sure. I hate to invest the effort to figure it out, when I may well be the only person left on the Internet using this particular combination of software, and the proper solution is clearly a migration to another platform.

But if you happen (by some chance) to run into this yourself, and this blog hasn’t crashed (or you’re reading somebody’s helpful cache of this page!), be aware that deleting the cache is seemingly only a temporary fix. A hacky workaround would be to set up a cron job to periodically clear the Calendar plugin cache, which would at least put an upper bound on the amount of downtime the problem can cause, but that’s … pretty ugly, even for me, and I’m largely immune to opposition to solutions on aesthetic grounds these days.
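If you do resort to the cron hack, the crontab entry is a one-liner. Again, the cache path is a placeholder for wherever your Calendar plugin actually keeps its state:

```shell
# Crontab fragment (not a script): clear the Calendar plugin's cache
# every night at 04:15, so a corrupt cache can cost at most a day of
# downtime before it regenerates. Adjust the path for your own setup.
#
#   15 4 * * *  rm -f "$HOME/blosxom/state/.calendar_cache"
```

Install it with `crontab -e` on the account that owns the blog files.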

0 Comments, 0 Trackbacks

[/meta] permalink

Sun, 12 Mar 2017

Related to yesterday’s post about the AP article confirming that, in fact, modern cryptography is pretty good, there’s a reasonably decent discussion going on at Hacker News in response, with a mixture of the usual fearmongering and unjustified pessimism, but also some very good information.

This post, by HN user “colordrops”, is particularly worth discussing, despite falling a bit on the “pessimistic” side of things:

It seems that most people are completely in the dark when it comes to security, including myself, but there are some principles that should be unwavering that regularly get ignored again with every new iteration of “secure” software:

  • If there is a weak layer in the stack, from the physical layer to the UI, then the system is not secure. Even if your messaging app is secure, your messages are not secure if your OS is not secure
  • If the source code is not available for review, the software is not secure
  • If you or someone you trust has not done a full and thorough review of all components of the stack you are using, the software is not secure
  • Even if the source code is available, the runtime activity must be audited, as it could download binaries or take unsavory actions or connections.
  • On the same note, if you do not have a mechanism for verifying the authenticity of the entire stack, the software is not secure.
  • If any part of the stack has ever been compromised, including leaving your device unlocked for five minutes in a public place, the software is not secure.

I could go on, and I’m FAR from a security expert. People compromise way too much on security, and make all kinds of wrong assumptions when some new organization comes out and claims that their software is the “secure” option. We see this with apps like Telegram and Signal, where everyone thinks they are secure, but if you really dig down, most people believe they are secure for the wrong reasons:

  • The dev team seems like honest and capable people
  • Someone I trust or some famous person said this software is secure
  • They have a home page full of buzzwords and crypto jargon
  • They threw some code up on github
  • I heard they are secure in half a dozen tweets and media channels

First, I have to take serious issue with the author’s use of “secure” as an absolute. Thinking of “secure” as a binary, is-it-or-isn’t-it state is only useful in the most academic corners of cryptography, where we can talk about an algorithm being “secure” against certain kinds of analysis or attack. It is bordering on useless when you get into the dirtiness of the real world.

Implementations are not “secure” in the absolute. Implementations may be secure within a certain threat space, or for a certain set of needs, but security is always relative to some perceived adversary. If your adversary has unlimited resources, then no implementation will ever be secure over a long timescale. (An ‘unlimited resources’ adversary will just build Dyson spheres around a few nearby stars and use them to power computronium bruteforce machines. Good thing you don’t really have an unlimited-resources adversary, do you?)

Security is all about tradeoffs. As you make an implementation more robust, it becomes more cumbersome to use. Computers have done really amazing things to make formerly-cumbersome security easier to use, but this tradeoff still exists and probably will always exist once you start talking about practical attacks.

The implementation standards for government-level security, e.g. the handling of classified information by the US DOD and similar, require electronically shielded rooms and specially vetted equipment to prevent information leakage at the endpoints. But as the last few years have demonstrated, these systems — while extremely impressive and well-constructed — have still leaked information through human factors compromises. So in that sense, anything that involves a person is arguably “insecure”. For most applications, there’s no getting around that.

Beyond that, though, the author does make some good points about users believing that a program is “secure” for the wrong reasons, including buzzword-laden webpages, unverified claims in the media, or endorsement by famous people who do not have a significant reputation in the IT security community at stake. These are all real problems that have been exploited to push poorly-designed software onto users who deserve better.

Many modern apps, including not only Telegram and Signal but also Facebook Messenger in its end-to-end encrypted mode, and various corporate email systems, are “secure enough” for particular needs. They’ll almost certainly hide what you’re doing or saying from your family, friends, nosy neighbors, boss (provided you don’t work for an intelligence or law enforcement agency), spouse, etc., which is what I suspect all but a very small fraction of users actually require. So, for most people, they are functionally secure.

For the very small number of users whose activities are likely to cause them to be of interest to modern, well-funded, First World intelligence agencies, essentially no application running on a modern smartphone is going to be secure enough.

As others on HN point out, modern smartphones are essentially “black boxes” running vast amounts of closed-source, unauditable code, including in critical subsystems like the “baseband”. One anonymous user even alleges that:

The modifications installed by your phone company, etc. are not open source. The baseband chip’s firmware is not open sourced. I’ve even heard of DMA being allowed over baseband as part of the Lawful Intercept Protocol.

There is, naturally, no sourcing on the specific claim about DMA over the cellular connection, but that would be a pretty neat trick: it would essentially be one step above remote code execution, and give a remote attacker access to the memory space of any application running on the device, perhaps without any sign (such as a typical rootkit or spyware suite would leave) that the device was tapped. Intriguing.

I am, personally, not really against intelligence agencies having these sorts of capabilities. The problem comes when they are too easy or cheap to use. The CIA’s stash of rootkits and zero-days is unlikely to be deployed except in bona fide (at least, perceived to be bona fide) national security situations, because of the expense involved in obtaining those sorts of vulnerabilities and the sharp drop in utility once they’ve been used once. They’re single-shot weapons, basically. If some were to get their way and manage to equip every consumer communications device with a mandatory backdoor, though, it would be only a matter of time before the usage criteria for that backdoor broadened from national security / terrorism scenarios, to serious domestic crimes like kidnapping, and then on down the line until it was being used for run-of-the-mill drug possession cases. And even if you think (and I will strongly disagree, but it’s out of scope for this post) that drug possession cases deserve the availability of those sorts of tools, in the process of that trickling-down of capabilities they would also doubtless fall into the hands of unintended third parties: from the cop who wants to see if their wife or husband is cheating on them, to organized crime, to Internet trolls and drive-by perverts looking for nude photos. Such is the lifecycle of security vulnerabilities: it all ends up in the hands of the “script kiddies” eventually.

Nobody has found a way to break that lifecycle so far: today’s zero-days are tomorrow’s slightly gifted highschooler’s tools for spying on the girl or boy they fancy in class. Intentionally creating large vulnerabilities — which is exactly what a backdoor would be — just means everyone along the food chain would get a bigger meal as it became more and more widely available.

The only solution, as I see it, is to keep doing pretty much what we’ve been doing: keep funding security research to harden devices and platforms, and keep funding the research on the other side of the equation (both in the private sector and via the IC) that tries to pick away at them. Then hope that the balance remains relatively constant, and similar to what we currently enjoy: enough security for the average person to keep their communications private from those they don’t want to share them with, impressively secure communications for those who want to put in the effort, but enough capabilities on the law-enforcement and intelligence side to keep organized crime and terrorism disrupted in terms of communications.

0 Comments, 0 Trackbacks

[/technology] permalink

Sat, 11 Mar 2017

Way back in 2013, I wrote about the NSA leaks and how I didn’t think that they signified any fundamental change in the balance of power between cryptographers and cryptanalysts that has been going on for centuries. It would seem that the New York Times has finally worked through their backlog and more or less agrees.

(The article in question comes from the AP, so if the NYT website doesn’t want to load or gets paywalled or taken out by a Trump Republic drone strike at some point in the future, you can always just Google the title and turn it up. Probably.)

The tl;dr version:

Documents purportedly outlining a massive CIA surveillance program suggest that CIA agents must go to great lengths to circumvent encryption they can’t break. In many cases, physical presence is required to carry off these targeted attacks. […] It’s much like the old days when “they would have broken into a house to plant a microphone,” said Steven Bellovin, a Columbia University professor who has long studied cybersecurity issues.

In other words, it’s pretty much what we expect the CIA to be doing, and what they’re presumably pretty good at, or at least ought to be pretty good at given the amount of time they’ve had to get good at it.

Which means that I was pretty much on target back in 2013, and the sky-is-falling brigade was not:

My guess […] is that there’s nothing fundamentally wrong with public key crypto, or even in many carefully-built implementations. It’s when you start optimizing for cost or speed that you open the door.

So: if you are very, very careful, you will still be able to build up a reasonably-secure infrastructure using currently available hardware and software. (‘Reasonably secure’ meaning resistant to untargeted mass surveillance, not necessarily to a targeted attack that might include physical bugging: that’s a much higher bar.) However, some code may need to be changed in order to eliminate any reliance on possibly-compromised components, such as hardware RNGs / accelerators that by their nature are difficult to audit.

In the past few years, most widely-used crypto libraries have moved away from hardware PRNGs that are thought to be suspect, and generally taken a less seat-of-the-pants approach to optimizing for speed than was previously (sometimes) the case. For security, this is largely a good thing.

In terms of intelligence-gathering capability, it’s presumably a degradation vs. the mass-intercept capability that certain agencies might have had when more traffic was unencrypted or poorly-encrypted, but it was foolish to believe that situation was going to go on forever. End-to-end crypto has been a goal of the pro-security tech community (formerly and now cringingly referred to as “cypherpunks”, back when that seemed like a cool name) for almost two decades, and would have happened eventually.

The IC still has significant tools at its disposal, including traffic analysis and metadata analysis, targeted bruteforcing of particular messages, encrypted content, or SSL/TLS sessions, endpoint compromises, human factors compromise, and potentially future developments in the quantum cryptography/cryptanalysis space. Without defending or minimizing Snowden et al, I do not think that it means the end of intelligence in any meaningful sense; those predictions, too, were overstated.

Anyway, it’s always nice to get some validation once in a while that the worst predictions don’t always turn out to be the correct ones. (Doesn’t quite make up for my hilariously blown call on the election, but at least I wasn’t alone in that one.)

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 10 Feb 2017

I realized today that this blog had crashed, again. It’s becoming obvious that I’m going to need to find some other hosting / server-side engine solution, because the downtime and associated maintenance (mostly due to Perl’s constant non-backwards-compatible changes, from what I can tell) are really not working for me anymore. I will try, as a Good Internet Citizen, to keep all the posts up and keep the URLs stable, but who knows what will happen. I may well just render the existing blog corpus to static HTML pages and call it quits. It has been a fairly good run at this point.

I’m already getting penalized by Google for not being “mobile friendly” enough, which I think I’ve decided I don’t really care about (in fairness, the blog pages look mostly OK on my Android device), but it’s yet another task you have to do now if you want to put content online. The bar has gotten a lot higher since I started this thing, or at least it seems that way.

Anyway: the culprit today was a corrupted cache file used by the Blosxom Calendar plugin. Chasing this down was made more complicated than it needed to be, because by default the Calendar plugin stores its cache in a dotfile, which most of my other plugins don’t. No clue why; that seems like slightly unfriendly behavior.

I tweaked the configuration a bit so that the file is now regularly visible. Anyone else using Blosxom+Calendar may or may not be interested in the change.

0 Comments, 0 Trackbacks

[/meta] permalink