Kadin2048's Weblog


Fri, 30 Nov 2007

I made an unwitting discovery earlier this week regarding Apple Mail and its built-in S/MIME functionality, when used in combination with Sen:te Software’s free (and excellent!) GPGMail: for reasons that I can’t quite figure out so far, if you send an encrypted S/MIME message and also sign it using GPG (OpenPGP style, not ASCII-armored), the resulting message will be corrupted and unreadable by the recipient.

I verified this using Apple Mail 2.1 and GPGMail 1.1.

Note that it’s perfectly okay to send a message that’s signed both ways, and I do this frequently. Both signatures will verify on the other end (assuming nothing gets mangled in the mail system). The problem just seems to occur if you try to encapsulate an OpenPGP signed message inside an S/MIME encrypted one.

So far I haven’t tested the reverse (an S/MIME signed message encrypted with OpenPGP and sent that way), because many more of my correspondents use S/MIME than GPG.
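One way to investigate this sort of corruption is to look at how the MIME layers actually nest in the message the recipient receives. Below is a sketch using Python’s standard email module; the message is an invented multipart/signed skeleton for illustration, not a capture of the actual failing mail.

```python
# Sketch: walk the MIME tree of a message to see how layers nest.
# The sample message here is a hypothetical multipart/signed skeleton.
import email
from email import policy

raw = b"""Content-Type: multipart/signed; protocol="application/pgp-signature";
 boundary="outer"

--outer
Content-Type: text/plain

Hello
--outer
Content-Type: application/pgp-signature

(signature data)
--outer--
"""

msg = email.message_from_bytes(raw, policy=policy.default)

def walk(part, depth=0):
    # Print each part's content type, indented by nesting depth.
    print("  " * depth + part.get_content_type())
    if part.is_multipart():
        for sub in part.iter_parts():
            walk(sub, depth + 1)

walk(msg)
```

Running this against a saved copy of the corrupted message (and a working one) would show exactly where the encapsulation goes wrong.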

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 23 Nov 2007

As a result of this Slashdot FP, I spent a fair bit of time this afternoon reading up on “Permissive Action Links” or PALs. PALs are the systems which prevent the unauthorized use of nuclear weapons in the U.S. and allied arsenals; they’re the real versions of the ‘arming code’ devices that Hollywood loves so much.

Steven M. Bellovin, a professor at Columbia University in the Computer Science department, has a fascinating page on the topic, excellent not only for its analysis but for the depth of the material it references.

PALs are interesting because they (hopefully) represent the most extreme, highest-stakes use of both physical and electronic security measures. However, in reading about them, it’s easy to see parallels to more mundane scenarios.

Bellovin quotes from Assuring Control of Nuclear Weapons:

There are two basic means of foiling any lock, from an automobile ignition switch to a PAL: the first is to pick it, and the second is to bypass it. From the very beginning of the development of PAL technology, it was recognized that the real challenge was to build a system that afforded protection against the latter threat. … The protective system is designed to foil the probes of the most sophisticated unauthorized user. It is currently believed that even someone who gained possession of such a weapon, had a set of drawings, and enjoyed the technical capability of one of the national laboratories would be unable to successfully cause a detonation without knowing the code.

Does this sound familiar? It should: you could just as easily be describing the hardware design goals of a DRM system like AACS. And why shouldn’t it? A PAL really is just a high-stakes DRM system. The point is to allow access by authorized users who possess a code, while denying others, even if the people you want to deny have access to the whole assembly.

Based on the declassified information available, the PAL consists of a tamper-resistant ‘secure envelope’ or ‘protective skin,’ into which certain arming components are placed. This envelope can be thought of both as a physical and a logical region. It protects the components inside against both physical tampering and remote sensing (X-rays, etc.), as well as informational attacks (brute forcing of the key code, replay attacks); a breach of the envelope results in irreversible disabling of the device. The inputs and outputs from the secure envelope are carefully designed according to the “strong/weak link” principle:

Critical elements of the detonator system are deliberately “weak”, in that they will irreversibly fail if exposed to certain kinds of abnormal environments. A commonly-used example is a capacitor whose components will melt at reasonably low temperatures. The “strong” link provides electrical isolation of the detonation system; it only responds to very particular inputs.

Strong and weak links need not be electromechanical; one could envision similar constructs in modularized software, for instance. In fact, most of the basic concepts of tamper-resistance can be envisioned both literally (as hardware systems; sealed boxes full of pressure and X-ray sensors) and abstractly (modules and their handling of exceptions).
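As a purely illustrative sketch (my own construction, not anything from the declassified material), the strong/weak-link pairing might look like this in software: the strong link passes only an exact, expected input, and the weak link fails irreversibly at any sign of abuse.

```python
class StrongWeakLink:
    """Illustrative sketch of the strong/weak-link idea in software.

    The 'strong link' only responds to a very particular input; the
    'weak link' irreversibly disables the device when tripped, like a
    capacitor designed to melt in an abnormal environment.
    """

    def __init__(self, expected_code: str):
        self._expected = expected_code
        self._disabled = False  # weak link state: once tripped, never resets

    def tamper_detected(self) -> None:
        self._disabled = True   # irreversible failure

    def arm(self, code: str) -> bool:
        if self._disabled:
            return False        # a tripped weak link can never arm
        if code != self._expected:
            self.tamper_detected()  # wrong input counts as abuse
            return False
        return True             # strong link: exact input required


gate = StrongWeakLink(expected_code="0451")
print(gate.arm("0000"))  # wrong input trips the weak link -> False
print(gate.arm("0451"))  # correct code, but already disabled -> False
```

The key property is that a single wrong probe destroys the mechanism, which is what makes brute-forcing the code useless.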

PALs are interesting because they represent the logical conclusion of tamper-resistance systems. Given the direction that commercial content-protection systems are going in, and the resulting cat-and-mouse games with the hacker/cracker community, it’s my view that consumer electronics will increasingly include PAL-like tamperproof elements. Thus, a conceptual understanding of PALs might be exactly the sort of knowledge you’d want to acquire, if your desire was to end-run the inevitable (in my view, given the current climate) next generation of DRM hardware.

Of course, the obvious downside of this is that the same research that a relatively innocent hacker might conduct into the avoidance or circumvention of annoying DRM systems, might also be the same sort of knowledge that you’d need to circumvent the PAL on a nuclear weapon. Regardless of whether this is a valid national security threat, it’s exactly the sort of justification you’d want if your goal was to quash research that was threatening your business (or political-contribution) model. Given the not-infrequent collusion between the industries that benefit from DRM and the government (also c.f. The Attraction of Strong IP by yours truly), I’m not sure this is as farfetched as it might sound.

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 09 Nov 2007

For the last few days I’ve been fiddling around, trying to solve an annoying problem between an OpenBSD machine (running OBSD 4.1) and my Mac. When I connected to the OBSD box from a terminal on the Mac, using either Apple’s Terminal.app or the freeware iTerm, any text written to the terminal after quitting Emacs kept the background color that had been set in Emacs. It was as though Emacs simply refused to unset the terminal’s background color on exit.

I sought help from the comp.unix.bsd.openbsd.misc newsgroup, and they didn’t let me down. I got two very helpful responses, from Thomas Dickey and Christian Weisgerber: both suggested that it was my choice of ‘xterm-color’ as a terminal type that was to blame.

Changing the TERM setting on my OBSD machine (in “.profile”) to ‘xterm-xfree86’ seemed to do the trick, and now I get a nice colorized Emacs, and it drops back cleanly into the Terminal’s defaults on exit.

Thomas Dickey also gave a link to a terminfo database that’s significantly more up to date than the default one included with OpenBSD; it’s available via FTP at his site here. However, even in his version there’s no specific termcap entry for Apple’s Terminal.app; the best fit still seems to be xterm-xfree86.
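If you want to see for yourself what a given terminfo entry claims, Python’s curses module can query it. The sketch below looks up the ‘op’ (original pair) capability, the escape sequence a program sends to restore the terminal’s default colors; whether a particular entry exists depends on your local terminfo database.

```python
# Compare what two terminfo entries say about restoring default colors.
# 'op' (original pair) is the sequence used to reset fore/background.
# Availability of these entries depends on the local terminfo database.
import curses

for name in ("xterm-color", "xterm-xfree86"):
    try:
        curses.setupterm(term=name)
        op = curses.tigetstr("op")
        print(name, "op =", repr(op))
    except curses.error:
        print(name, "is not in the local terminfo database")
```

A TERM entry whose ‘op’ (or related reset capabilities) doesn’t match what the terminal emulator actually understands is exactly the sort of mismatch that leaves colors stuck after an application exits.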

The one caveat of all this is that the cursor color seems to be unaffected by whatever’s specified in the “.emacs” configuration file when you connect via a remote (SSH) terminal, so it makes sense to choose an Emacs color scheme that’s similar to your terminal default (or else you may end up with a black cursor on a black background).

0 Comments, 0 Trackbacks

[/technology] permalink

Thu, 08 Nov 2007

If you use a Mac, you may have at some point saved a pointer to an interesting page by dragging its ‘favicon’ (the little icon that sits to the left of a page’s URL in the URL bar in most browsers) to the Finder, which creates a neat little file.

I’d also been doing this, and blindly assumed that the files created in the Finder were standard “.url” files — basically nothing but an ASCII text file containing the page’s address. However, they’re not.

As a quick peek in the Get Info window will show you, they’re actually .webloc files, which are somewhat more complex than basic .url files.

For starters, the data is XML, formatted as a “PLIST” (one of Apple’s favorite XML schemas). The .webloc file for “http://kadin.sdf-us.org” is shown below.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>URL</key>
	<string>http://kadin.sdf-us.org</string>
</dict>
</plist>

This is the case in Mac OS 10.4, at least. With previous versions of the OS, it seems that the URL data might have been contained in the file’s resource fork instead of, or in addition to, the XML PLIST.
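Because the format is just a standard XML plist, it’s trivial to read and write from a script. Here’s a sketch using Python’s plistlib (the file name is my own; a real Finder-created .webloc may carry extra Finder metadata that this doesn’t reproduce):

```python
# Read and write a .webloc-style XML plist with the stdlib plistlib module.
import plistlib

# A .webloc body is a plist dict with a single "URL" key.
data = {"URL": "http://kadin.sdf-us.org"}

with open("example.webloc", "wb") as f:
    plistlib.dump(data, f)  # XML plist is plistlib's default output format

with open("example.webloc", "rb") as f:
    loaded = plistlib.load(f)

print(loaded["URL"])  # -> http://kadin.sdf-us.org
```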

Although normally I’d berate Apple here for ignoring an established de facto standard (the .url file) that works well, the .webloc format is interesting, because it’s easily extended. You could, for instance, encapsulate not just the URL of a page, but its entire HTML contents, or an MD5 hash, into the .webloc, if you wanted to. And, of course, it’s UTF-8 rather than ASCII (and it makes it clear that it’s UTF-8, rather than leaving the determination up to the user’s application), so it has obvious localization advantages.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Mon, 05 Nov 2007

Email encryption is a topic that comes up frequently in both technical and privacy circles. Pretty much everyone with any sense agrees that it would be a good thing — or at least a better situation than we have right now — if encryption was more widespread and not limited to geeks and the occasional criminal, but exactly how to get encryption into the hands of the masses in a usable form remains a challenge.

One of the problems is that most email-encryption products that offer end-to-end privacy (as opposed to simple transport-layer privacy, like SSL) are designed as part of a traditional desktop MUA, and many people are moving away from POP-based email and desktop MUAs in favor of server-stored messages and webmail.

This presents a problem, since without a desktop MUA, it’s not clear where the encryption/decryption logic will live. Some past schemes offering ‘encrypted webmail’ (e.g. HushMail, at least based on my understanding of how it works) do the message encryption on the server, relying on transport-layer security to get the message to and from the user’s web browser.

This approach is seriously flawed: it requires that the user trust the webmail provider, something I think they probably should be wary of doing. (After all, the webmail provider may not be ‘evil,’ but almost certainly has priorities that are different from those of any randomly-chosen individual user.) Once you send your unencrypted message off to the server, even if it’s via SSL, you really have no idea what becomes of it or who can read it.

For real security, you need to encrypt the message before you let it out of your sight. What’s needed is something that combines the security of end-to-end encryption and client-side logic, with the convenience of webmail.

Naturally, I’m not the first person to have gone down this path. Herbert Hanewinkel, of Hanewin.net, even has a nice example implementation of GPG encryption in Javascript, under a freely-modifiable and re-distributable license. With it, you can plug in a public key, type some text, and have it encrypted for that key, all right in your browser. As he points out, this has several advantages:

  • All code is implemented in readable Javascript.
  • You can save the page and verify the source code.
  • No binaries are loaded from a server or used embedded.
  • No hidden transfer of plain text.

As-is, this is a nice way to submit forms (he has a contact form on his site that encrypts the message with his public key and sends it); combined with a matching decryptor, it could be the basis for a secure webmail system that doesn’t require the user to trust their ISP or the mailserver operator. (Sort of, anyway: the user would have to be constantly vigilant that the JS applet that they were being sent was the real thing…)

John Walker at Fourmilab.ch has a more generalized version called Javascrypt that does both encryption and decryption. (Hanewinkel’s encryptor seems to be based on Walker’s, but includes some performance enhancements.) His page also has a nice summary of the benefits of browser-based cryptography and some of its weaknesses and vulnerabilities.

While it would be nice if Google built a JavaScript implementation of GPG into its next version of Gmail, I’m not going to hold my breath (for starters, it would make their business model — basically data-mining all your stored messages — impractical). But I don’t think it would be too difficult to take the examples that are around right now, and work them into some of the more common OSS webmail packages.

1 Comments, 0 Trackbacks

[/technology/web] permalink

Fri, 02 Nov 2007

A few years ago I typed up a fairly substantial document, in response to what I perceived as a lot of general ignorance concerning the origin of the “right to privacy” in the United States. Although jurisprudence is not my profession, it’s something of an interest of mine, and I tried to sum up a few of the major cases and issues involved. At the very least, my hope is that it will give the lay reader an appreciation for any upcoming Supreme Court cases, or at least allow them to hold their own in a conversation.

My original version was written in late 2002, and never read by anyone but myself. I recently updated it to cover the biggest development between then and now (Lawrence v. Texas, in 2003), and now I’m tossing it online. Please be aware: this is for basic education/entertainment only — it’s not a scholarly work and you certainly shouldn’t cite it anywhere.

It’s available as a Markdown-formatted UTF-8 text document, and as minimally-formatted XHTML. It’s licensed CC-BY-SA.

1 Comments, 0 Trackbacks

[/politics] permalink

Thu, 01 Nov 2007

Since I think the probability that anyone out there actually reads this thing is fairly low, at least right now, I haven’t bothered to do much in the way of making it easy for readers to contact me. I realize this is semi-obnoxious behaviour, and I’m working to fix it.

So far I’ve hesitated to put an email address up because I know it would just become flooded with spam, and because people coming to this site from Slashdot.org can already get an email address for me fairly easily there, and SDF members can simply send me an email within the system.

But just in case there’s anyone out there who isn’t a member of those two groups, and would like to drop me a message, here’s a ROT-13 encoded address you can feel free to use: “oybt1.xnqva@fcnztbhezrg.pbz”. (Yes, it’s a Spamgourmet address.)
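If you’d rather not decode it by hand, ROT-13 is built into Python’s codecs module; this sketch recovers the plain address (deliberately not printed here, to keep it away from address-harvesting bots):

```python
# Decode a ROT-13 obfuscated address with the stdlib codecs module.
import codecs

encoded = "oybt1.xnqva@fcnztbhezrg.pbz"
decoded = codecs.decode(encoded, "rot_13")
# 'decoded' now holds the usable plain-text address. ROT-13 is its own
# inverse, so encoding the result again gives back the obfuscated form.
```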

In the very near future, I may set up comments here on the blog. If all goes well and I don’t get too inundated with spam, that will probably be the best way for random passers-by to comment or respond, should they want to.

2 Comments, 0 Trackbacks

[/meta] permalink