Kadin2048's Weblog

Tue, 02 Dec 2008

Yesterday, I finally got around to upgrading my home server from Ubuntu 6.06 LTS (aka ‘Dapper Drake’) to the latest “long-term support” release, 8.04 LTS (‘Hardy Heron’).

Pretty much everything went according to plan. Since my server is headless, I was a bit nervous about the whole thing — having to attach a monitor and keyboard to it would have been a major problem. But the worry turned out to be unwarranted; the whole procedure was quite smooth.

The only issue I did run into was the dreaded “can’t set the locale; make sure $LC_* and $LANG are correct” problem, after I rebooted. This is a very common issue, and if you’re a Linux or BSD user and you haven’t run across it yet, chances are at some point you will. A quick search using Google will turn up hundreds of people looking for solutions.

Unfortunately it’s a nasty issue because there are many reasons why it can happen. In my case, none of the solutions suggested in most forum posts (run dpkg-reconfigure locales, check the output of locale -a, etc.) worked. However, when I looked at the current values of $LANG and $LC_ALL, I noticed that they were incorrect.

In particular:

$ echo $LANG
en_US

This is wrong. The correct locale specifies a text encoding, so a proper value is en_US.UTF-8, not just en_US.
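To see the distinction concretely, Python’s locale module can expand a bare locale name into its canonical form, including the codeset it implies by default — and note that the default codeset for a bare en_US is Latin-1, not UTF-8, which is exactly why leaving the encoding off causes trouble:

```python
import locale

# locale.normalize() expands a bare "language_territory" name into a
# canonical full locale name, including a default codeset.
print(locale.normalize("en_US"))       # bare name: implies ISO8859-1
print(locale.normalize("en_US.utf8"))  # explicit encoding: 'en_US.UTF-8'
```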

Unfortunately, it took me a long time to figure out where to set this value. Throwing it into my .bashrc would have solved the problem when I was logged in and running things as my user, but it wouldn’t have prevented it from cropping up when the root user’s cron tasks ran automatically (which had been getting me error emails every few minutes; pretty annoying).

What I wanted was to set LANG=en_US.UTF-8 as a global variable for the entire system, for all users, all the time, whether running interactively or not. In order to do this, the file /etc/environment must be edited. This file holds global variables that apply to the entire system: typically just the locale and a bare-minimum PATH.

To /etc/environment I added (the first line was present but specified “en_US”):

LANG="en_US.UTF-8"
LANGUAGE="en_US.UTF-8"
LC_ALL="en_US.UTF-8"

In order to get this to take effect, I had to restart all my open shells, including a few instances of screen, but once I did, the problems went away.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Sat, 29 Nov 2008

Roubini Global Economics (RGE) has an interesting article called “Can China Adjust to the US Adjustment” that discusses, among other things, the relationship between foreign trade and the current credit ‘crunch.’ It’s a fascinating article both because of the interesting history it describes, and some of the predictions it makes.

Although we can’t say for sure, it is probably safe to argue that US savings rates will climb back to earlier average levels, or even temporarily exceed those levels, as American households rebuild their shattered balance sheets. If they return only to the mid-point of earlier savings rates, this implies that US household savings must rise by some amount equal to roughly 5% of US GDP, or, to put it another way, that all other things being equal US household consumption must decline by that amount.

Although it may just be that I haven’t been paying close enough attention, this is the first time I’ve seen anyone toss out a hard number estimate of how much they expect consumption to fall by. Pretty much everyone expects consumption to fall by some amount, but ‘how much’ is the real issue.

This decline — whatever it ends up being — will inevitably cause a decrease in U.S. imports from China, and that will have to be compensated for by either an increase in domestic Chinese consumption, or a decrease in production. The article suggests, and I agree, that the former is highly unlikely. Although Chinese household spending is on the rise, there is just no way that it will rise fast enough or high enough to maintain the insane level of consumption that was until recently being bankrolled by the U.S. Hence, production there must fall.

Of course, falling production means factory closures and job loss, and that means domestic consumption will fall yet further, leading to a nasty downward spiral. The parallels drawn in the article between 1929 in the U.S. and 2008 in China seem pretty well-grounded; except, of course, that in 2008 Chinese regulators have volumes of economic theory and analysis written about 1929 at their disposal, if they choose to make use of it.

[Via MetaFilter.]

[/finance] permalink

Wed, 05 Nov 2008

Edward Wright, a British Conservative, has an interesting piece about the future direction of the Republican party here in the U.S., full of suggestions that the party leadership would do well to take to heart. There are many parallels between the defeat of the Tories in 1997, and what happened yesterday; both lost the trust of the public after economic turmoil, and both had spent too long drinking their own Kool-Aid while neglecting their stated reason for existing.

It is when parties deviate from their fundamental intellectual core that they suffer the most. The most important example of this in the current administration is public spending. Whilst tax cuts helped to keep the American economy growing their pre-requisite — low public spending — was ignored. It’s harder to demonise big government liberals when you have spent eight years turning a healthy budget surplus into a massive deficit, a deficit which represents a massive tax burden on future generations in the form of interest payments to Chinese bankers.

In Britain the ideological departure had serious underpinnings and serious consequences. The pragmatic conservatism of the previous 150 years was eschewed in exchange for the dynamic monetarism, privatisation and market liberalisation of the Thatcher revolution. To succeed once more the GOP must rediscover its own ideological core, an ideology that is found not in the anti-intellectual city-dweller baiting of Sarah Palin but in integrity in government, individual freedom and not just low taxes but low spending.

It is difficult, as a small-government Libertarian conservative, to find much of a silver lining in yesterday’s election; not only does it bring us dangerously close to one-party rule — just two Senate votes, at the time of writing, and that only if Senatorial filibuster rules are not changed — but it seems destined to lead to yet more government interventionism. About the only positive aspect of it that I can find, is that it might represent the death knell of the far-right, authoritarian “conservatives” that have monopolized the GOP brand for too long.

The ‘Evangelical Right’ should have always been the party’s fringe, not its core; by making it the latter, Republican leaders virtually guaranteed yesterday’s outcome sooner or later. The far-right just isn’t socially mainstream enough to form the core of a majority political party. That the strategy worked for as long as it did is remarkable, but — perhaps thankfully — it has found its limit.

The much-ballyhooed ‘silent majority’ was willing to nod along with social authoritarians — men and women who seemed more interested in what was going on in their neighbors’ bedrooms than in Wall Street boardrooms — so long as the economy was humming along and we were winning wars abroad. But once that ended, so too did the public’s tolerance for politicians who had built their careers obsessing over irrelevancies. And let’s be clear: to all but a hard core of religious conservatives, when Wall Street is melting down, concerns over fetuses and buggery are worse than irrelevant.

The question now is whether the Republican party will pull itself together in time to save the country from sliding disastrously far to the left. They have two years in which they must formulate a new message, or at least rediscover an old message that they seem to have forgotten, and take that message to the public, before mid-term elections. I sincerely hope that they can do it, because as bad as the two-party system is, a one-party system — which is what we’re looking at if the Republican party doesn’t adopt a ‘big tent’ platform very quickly — would be far worse.

[/politics] permalink

Tue, 04 Nov 2008

One of my favorite uses for Google Reader is to keep an eye on the feed of latest releases from Project Gutenberg and Distributed Proofreaders. Although the great majority of what they archive is of limited interest (at least to me), every once in a while something really cool comes through.

“Successful Stock Speculation” (HTML version), originally published in 1922, is worth a read. It was written as a handbook for the novice investor, and just about everything in it is still good advice. I suspect that if more amateur speculators (by which I mean those using their own, as opposed to someone else’s, money) stuck to the very conventional wisdom in the book, far fewer people would lose their shirts in the process.

[The word ‘speculator’] refers to a person who buys stocks for profit, with the expectation of selling at a higher price, without reference to the earnings of the stock. … An investor differs from a speculator in the fact that he buys stocks or bonds with the expectation of holding them for some time for the income to be derived from them.

This is an important distinction which, sadly, seems to have been lost recently. There’s a fundamental difference between speculation and investment; people who buy and hope for a change in price of the underlying asset, so that they can sell and realize a profit, are not investing. They are speculating. This is true whether the underlying asset is stocks, bonds, or real estate. I’d argue that misunderstanding the difference between the two is the root cause of virtually all market ‘horror stories’ (Grandma and Grampa wiping out their retirement, the kids’ college fund, etc.).

As a usual thing, it is a good time to buy stocks when nearly everybody wants to sell them. When general business conditions are bad, trading on the stock exchanges very light, and everybody you meet appears to be pessimistic, then we advise you to look for bargains in stocks. […] When business is bad, nearly everybody thinks business will be bad for a long time, and when business is good, nearly everybody thinks business will be good almost indefinitely. As a matter of fact, conditions are always changing. It never is possible for either extremely good times nor for extremely bad times to continue indefinitely.

Sound familiar? There’s nothing new here; you could read practically the same thing in just about any Warren Buffett book. But the fact that it still works so well, so unusually well in fact, is a testament to how many people just don’t seem to get it.

The same could be said for the book’s advice on picking which stocks to buy:

We maintain that there is only one basis upon which successful speculation can be carried on continually; that is, never to buy a security unless it is selling at a price below that which is warranted by assets, earning power, and prospective future earning power.

Today we might call this “intrinsic value” or “fundamental value,” and this school of thought ‘Value Investing’ (although, for the reasons noted above, chances are it’s not really “investment” but rather ‘value-driven speculation’).
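As a toy illustration of the quoted rule — and to be clear, the formula and numbers here are invented for the sketch; the book prescribes no specific arithmetic — the idea is simply to compare the market price against a value justified by assets and earnings:

```python
# Toy sketch of value-driven speculation: only buy when the market price
# is below a value "warranted by assets, earning power, and prospective
# future earning power". The weighting below is made up for illustration.
def warranted_value(book_value_per_share, eps, earnings_multiple=8):
    # crude: hard assets plus a multiple on current earnings
    return book_value_per_share + eps * earnings_multiple

price = 30.0
value = warranted_value(book_value_per_share=20.0, eps=2.5)
print("buy" if price < value else "pass")
```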

Although other strategies seem to get all the attention during bull markets, it’s the unsexy science of fundamentals that has seemingly withstood the test of time, and is still as useful today — regardless of what the future might hold — as it was in the early 20s.

[/finance] permalink

I was heartened to read this over at Calculated Risk earlier today. It’s mainly a link to a WSJ article, but the punchline is blunt:

Any attempt to keep house prices artificially high will just postpone the inevitable and delay the eventual recovery.

At least somebody seems to get it. Pity that ‘somebody’ doesn’t seem to include, oh, anybody in Washington. At least not yet, but the gist of the article is that the truth is beginning to sink in.

Unfortunately I think it’s too late for that truth to have prevented a costly and probably pointless bailout, but it might be in time to prevent too much meddling in the retail real estate market. Of all the markets that need a good cleansing burn, that’s it. However, it’s also the one prone to attracting the most interference from Congress, as idiots who forgot that a house should be a place to live first — and not an ATM or an IRA — squeal and moan as they learn that actions have consequences.

The most dangerous idea to creep in is that a decline in housing prices is, by itself, somehow bad. Whenever you see someone in a suit pointing to the decline in prices and suggesting that it is a problem to be solved, be afraid. It’s dangerous for two reasons: one, because it’s wrong — inflated housing prices were a symptom of the credit bubble, and their decline is quite natural as that bubble works itself out; two, because it’s an easy target for government intervention.

There’s nothing politicians like better than treating the symptoms of a problem; it’s so much easier, after all, than actually going after the root cause, and most of the time the public never notices the difference.

If the government succeeds in convincing the public (and Wall Street, who on the whole haven’t shown themselves to be much more savvy than the public at large anyway) that the decline in prices is a symptom that ought to be treated, and somehow find a way to prop those prices up at their inflated levels, a generation of financially responsible Americans will be effectively locked out of home ownership. I really can’t imagine anything more toxic to the long-term faith of the public in the markets than that.

[/finance] permalink

Sat, 27 Sep 2008

Although there are lots of ways to hedge against a fall in the U.S. dollar, I was inspired by this post to look at four funds in particular that seem like promising, low-cost ways for the small investor to flee the dollar, if they desire:

  • FXE
  • MERKX
  • PSAFX
  • VGPMX

Starting from the last, VGPMX is a precious metals mining fund operated by Vanguard. Currently it’s closed to new investors, so you’re S.O.L. if you don’t already own shares. It’s sort of the thinking man’s alternative to actually buying gold; instead of buying the gold directly, it invests in companies that mine gold and other precious metals, and which tend to be worth more when gold is high. It would have been a good buy a few years ago, but such is hindsight.

Next up is PSAFX, the “Prudent Global Income Fund”. According to Bloggingstocks.com, it “holds mostly fixed-income securities denominated in foreign currencies. Roughly 70% of its investments were in foreign debt at the end of the third quarter, with the euro, Swiss franc, and Canadian dollar receiving the largest allocations. […] the fund concentrates on the highest-rated debt, such as government securities. And as an extra dollar hedge, 11% of its assets were recently in gold and gold shares.” Basically, it’s very similar to a “money market” fund that you might buy into through your bank or credit union, but with a more diverse set of underlying assets. (Most ‘money market’ accounts invest only in U.S. government or municipal debt.) The gold is sort of a bonus, if you like that sort of thing. Expense ratio totals to 1.28% according to Google.

The Merk Hard Currency Fund (MERKX) is similar, investing in very high-grade government debt in a variety of “hard” currencies. According to Merk’s website, it’s almost 40% Euro, 17% Swiss Franc, 17% Canadian Dollar, and 9% gold, with the remainder spread across other currencies in smaller chunks. Their page provides a good breakdown of assets and sectors, so I won’t duplicate it here, but the biggest government debt holding is German, followed by Canadian and Swiss, followed by cash and gold. It’s an interesting option, to be sure. The minimum initial investment is $2500, with $100 buy-ins once the account is open. (IRA minimum is $1k.) The net expense ratio listed in the prospectus is 1.3%.

But what if you want to just hold cash, rather than debt instruments? In that case, the CurrencyShares Euro Trust (FXE) might be more interesting. It’s essentially like owning shares in a very big Euro savings account. The funds are actually kept in accounts (one interest-bearing and one non-interest-bearing) with JPMorgan Chase’s London branch, and the fund generates some profit this way, although it’s less than the debt-based funds.

Owning FXE is, at least as far as I can tell, the closest that a small investor can get to opening a Euro-denominated savings account and putting cash in there, without actually going through the hassle of setting one up. About the only downside I can see is that, because your funds are housed in two giant accounts with JPMorgan, you’re probably not protected by European deposit insurance in the event of a bank failure. (European deposit insurance is a bit of a patchwork at the moment, too, so it’s not really clear what would happen if things went south.)

Although you buy FXE just like any other ETF, it’s technically not a mutual fund or a commodity, nor is the operator an Investment Company. The ‘fund’ is actually a Trust, and according to the prospectus, “The Shares represent units of fractional undivided beneficial interest in, and ownership of, the Trust.” Although you buy the shares from a company called ‘Rydex Investments’, they’re not the trustee — that’d be the Bank of New York; Rydex just does the paperwork and marketing. There are some fairly ominous-sounding paragraphs in the prospectus detailing the circumstances under which the whole arrangement could be liquidated — possibly to the detriment of investors — and they bear reading, although it doesn’t seem like a major risk. (At least not more than anything else these days.)

When all this legal hand-waving — I counted four legal entities needed to operate it, located in such diverse locales as Delaware, Maryland, New York, and London — is done, the overall effect is that you, the buyer, get an interest rate equal to EONIA less 27 basis points (0.27%), minus an expense ratio of 0.40%, on top of whatever the EUR/USD exchange rate happens to do.
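Putting those numbers together, with a hypothetical EONIA level (the actual rate varies; only the 27-basis-point spread and 0.40% expense ratio come from the prospectus figures quoted above):

```python
# Rough net-yield arithmetic for FXE. The EONIA level below is a made-up
# example for illustration; the spread and expense ratio are the figures
# discussed in the prospectus.
eonia = 0.0425           # assumed 4.25% EONIA, illustration only
deposit_spread = 0.0027  # deposits earn EONIA less 27 basis points
expense_ratio = 0.0040
net_yield = eonia - deposit_spread - expense_ratio
print(f"net yield before currency moves: {net_yield:.2%}")
```

Any EUR/USD movement then adds to (or subtracts from) that return.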

These funds piqued my interest, so I’ll probably be doing some additional research over the next few days and reporting the results. The usual common-sense disclaimers apply to all the information here: it’s not investment advice, get a real advisor if you’re investing real money, don’t sue me if you lose your shirt.

[/finance] permalink

Fri, 05 Sep 2008

One of the strengths of the Internet in general and the Web in particular is the ease with which it lets an individual set up a site or page on a subject of interest to them, and share it with the world. Although that low barrier to entry opens the flood-gates to thousands of blogs (like this one), it also allows for vast amounts of information to be published on niche subjects by people who are truly passionate about them.

Leadholder.com is a perfect example of this in action. It’s a wonderful site — well-designed, easy to navigate, brimming with information — on a topic that I suspect most people would never cross paths with: a utilitarian drafting and drawing implement called the ‘lead holder.’

Now, I admit I find this sort of thing fascinating — I have an admitted weakness for precision tools in general, and drafting tools most of all — but even if you don’t share quite the level of interest in the subject matter that I do, it’s still a cool example of one of the greatest strengths of the web.

[Found via MeFi’s YoBananaBoy.]

[/technology/web] permalink

Sun, 31 Aug 2008

[Originally posted Friday 29 Aug 2008; corrected to fix formatting and typos.]

While flipping through the latest issue of “Cabling Installation & Maintenance” magazine (setting aside all questions of taste in reading material), I noticed a fairly neat product: CATV coax to Category 7A patch cables.

Apparently, the new (draft) ISO/IEC Cat 7A cabling standard has so much available bandwidth — supposedly useful to more than 1GHz — that you can run analog cable TV over it without anything more than a simple balun to convert the 75-ohm unbalanced coax connection to the 100-ohm balanced one used by Cat 7. This isn’t IPTV or digital compression, it’s just running the analog RF signal right over the balanced network wiring.
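The impedance conversion is a big part of what the balun is for; a quick calculation (standard transmission-line theory, not from the article) shows what a hypothetical direct splice of 75-ohm coax onto 100-ohm balanced pairs would do to the signal:

```python
import math

# Reflection at a direct 75-to-100-ohm junction (i.e., no balun):
# gamma = (Z2 - Z1) / (Z2 + Z1), the standard mismatch formula.
z_coax, z_pair = 75.0, 100.0
gamma = (z_pair - z_coax) / (z_pair + z_coax)
return_loss_db = -20 * math.log10(abs(gamma))
print(f"reflection coefficient: {gamma:.3f}")
print(f"return loss: {return_loss_db:.1f} dB")
```

About 14% of the incident voltage reflects back at such a junction, which in analog cable TV shows up as ghosting; the balun’s matching (plus the balanced-to-unbalanced conversion itself) is what avoids that.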

That’s pretty impressive — in comparison, Cat 5e UTP wiring is only useful up to around 100MHz, and Cat 6 up to 250MHz. And it opens up some neat possibilities for home wiring. Rather than having to decide which rooms you want to run coax to for cable TV, and which rooms to run Cat 5/6 to for data and phone, you could just run one type of cable everywhere. If you want cable TV, just hook it up (in the wiring closet / basement) to the incoming cable line; if you want data, plug it into a switch; if you want POTS, into a punchblock.

Having just spent far too much time screwing around with home wiring, that sounds like a pretty nice proposition.

[/technology] permalink

I decided to do a little playing around earlier this weekend with Python and CGI scripts. Just for something to do, I kludged together a little comment form for this site. It’s not yet operational — I still haven’t figured out how to get reCAPTCHA working via a CGI here on the SDF — but it will hopefully show up some day.

Anyway, I ran into a weird issue when trying to write to an “mbox”-format mail spool file using Python. Basically, rather than actually sending email from within my CGI script, I instead just wanted to take the user’s form input and write it to an mbox-style spool file somewhere on the filesystem, for later perusal using an MUA.

In theory, this should be fairly simple. Python comes with a standard library module called “mailbox” that’s purpose-built for working with a variety of spool/mailbox file types, and can add messages to them with ease. Unfortunately, I can’t seem to get it to work right; specifically, the message envelope delimiters don’t seem to be getting written correctly.

In an mbox-format spool file, each message is delimited by a string consisting of a newline, the word “From”, and a space. What comes after the word “From” isn’t really that important, but typically it’s the actual ‘From’ address followed by a timestamp. The crucial part in all this is that, with the exception of the very first message in an mbox file, the delimiter line that begins each message must be preceded by a blank line.

In other words, when writing new messages to an mbox file, you need to always start by writing a newline, or else you need to be religious about ending the text of each message with no fewer than two newline characters (and to check that they’re present), in order to guarantee a blank line at the end. (According to the qmail docs, the blank line is considered part of the end of the preceding message, rather than part of the ‘From_’ delimiter.)

Supposedly, when you use Python’s mailbox.mboxMessage class in conjunction with mailbox.mbox to create message objects and write them to a file, this should all be handled. However, it doesn’t seem to be working for me.

The code looks something like this (similar lines removed for clarity):

import datetime
import mailbox

# formdata is a cgi.FieldStorage instance (form-parsing code removed)
mailmsg = mailbox.mboxMessage()
mailmsg['To'] = 'Kadin'
mailmsg['From'] = formdata['from'].value
# Other headers removed...
mailmsg.set_payload(formdata['message'].value)

mboxfile = mailbox.mbox('/tmp/' + str(datetime.date.today()) + '.mbox',
                        factory=None, create=True)
mboxfile.lock()
mboxfile.add(mailmsg)
mboxfile.unlock()  # unlock() is a method call; a bare 'mboxfile.unlock' does nothing
mboxfile.close()

From my reading of the documentation and some similar code samples, this should produce a correctly-formatted mbox file — but it doesn’t. Instead, it produces this:

From MAILER-DAEMON Sun Aug 31 06:48:30 2008
To: Kadin
From: Testuser
Subject: FORMMAIL:Test Subject
Date: Sun Aug 31 02:48:30 2008
Reply-To: test@test.example

Test message would go here.
From MAILER-DAEMON Sun Aug 31 06:48:46 2008
To: Kadin
From: Testuser2
Subject: FORMMAIL:Test Subject 2
Date: Sun Aug 31 02:48:46 2008
Reply-To: test2@test.example

Another message would go here.

Notice that there’s no empty line between the two messages? That means that when the mbox file is parsed by most applications, they don’t see all the messages in the box. Instead, they simply assume that (since there are no valid delimiters) there’s just one really long message, and display it as such.

While I think I might be able to fix this by just adding a couple of newlines onto the entered text before it gets incorporated into the message object’s payload, that doesn’t seem like how things should have to work. Unless I’m just misunderstanding the mbox format (there are enough varieties of it, so it’s possible), it doesn’t seem like that ought to be required.

Most likely, I’m doing something wrong, but I can’t seem to figure out what … time to throw in the towel and come back to it tomorrow.
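For what it’s worth, the newline-padding idea can be sketched as a small wrapper. This is a hypothetical helper of my own, not part of the mailbox API, and on current Python versions the module writes the separator correctly by itself, so the padding mostly guards against bodies that don’t end in a newline:

```python
import mailbox
import tempfile

def add_with_padding(box, msg):
    # Guarantee the body ends in a blank line, so the next "From " line
    # is always preceded by one and parsers can find every message.
    payload = msg.get_payload()
    msg.set_payload(payload.rstrip("\n") + "\n\n")
    box.add(msg)

# Quick demonstration against a throwaway spool file.
spool = tempfile.NamedTemporaryFile(suffix=".mbox", delete=False)
spool.close()
box = mailbox.mbox(spool.name, factory=None, create=True)
box.lock()
for body in ("Test message would go here.", "Another message would go here."):
    msg = mailbox.mboxMessage()
    msg["To"] = "Kadin"
    msg["From"] = "Testuser"
    msg.set_payload(body)
    add_with_padding(box, msg)
box.unlock()
box.close()
print(len(mailbox.mbox(spool.name)))  # both messages should be visible
```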

[/technology/software] permalink

Sun, 24 Aug 2008

For the past several weeks, ever since moving into a new apartment, I’ve been racking my brain (and amassing a vast array of new drill bits) trying to figure out how to wire it for data. After some bad experiences trying to get MythTV to stream MPEG-2 video acceptably over 802.11g, I was convinced that the only network worth having was one built on good old UTP.

In the old apartment, I’d managed to successfully run Ethernet cabling from floor to floor and room to room without a lot of destruction or (almost equally importantly) visibility, by running it through the air ducts, along the plumbing, and through carefully-bored holes in closets and crawl-spaces. Unfortunately, none of the tricks that worked in previous places got me anywhere in the new one. All the ductwork has long horizontal runs and mysterious corners; the plumbing is sealed behind walls; there’s no attic or unfinished basement to run through … it’s just generally not friendly to guerilla networking projects.

In desperation — more than a week of MythTV-less existence was not winning me any friends — I started researching power-line and phone-line networking as alternatives to actually running new cable. Quite a lot has developed since the last time I ran a home network over phone wires (when Farallon’s PhoneNet was high tech), and there are quite a few options available.

The first decision to make is which medium you want to run data over: power lines, phone lines, or coax. Each has advantages and disadvantages.

Cable TV coax provides a high-quality medium for data transmission, but in many homes and apartment buildings that were constructed before cable TV became the norm, coax may only run to one or two locations. Also, the dominant standard for home networking over coax, HomePNA 3.0, supposedly doesn’t coexist with DOCSIS cable modems. That was enough to scare me away, since the last thing I want to do with my home LAN is interfere with my only Internet connection option.

The next-best option for a wired home LAN would seem to be phone wiring. HomePNA is also the dominant standard there, although you could probably cobble something together with VDSL equipment if you could get the gear. Unfortunately, I didn’t find many models available after filtering out the older HomePNA 1.0 and 2.0 devices, which are too slow to really compete with 100BT on Cat5. Apparently the dominant distributor of HomePNA chipsets, CopperGate, is focusing its attention mostly on integrating the technology into IPTV STBs and FiOS gateways. I found only a couple of standalone HomePNA-to-Ethernet bridges for sale, and at $83 per unit they’re not cheap.

The other option, and the least elegant in my opinion, is running data over the AC power lines throughout the house. Although they can be prone to creating RF interference, and can have widely varying performance even between different rooms in the same house (or even separate outlets in the same room), they do offer data communication over a basically ubiquitous medium. They’re also some of the easiest devices to find — I found them for sale both in the local computer warehouse (MicroCenter) and Best Buy.

Unfortunately, not all power-line networking devices are created equal. Over the years there have been several (mutually incompatible, naturally) attempts at producing a dominant data-over-mains standard, several of which are available:

  • HomePlug 1.0

The HomePlug 1.0 standard operates at 14Mb/s and was an attempt to reduce the number of incompatible vendor-specific protocols that were proliferating a few years ago, before WiFi took off. HomePlug 1.0 devices are available from quite a few vendors, although not all of them mark them as such. They have the benefit today of being relatively cheap and easy to find, but 14Mb (under optimal conditions) is unacceptably slow for what I needed them to do. The Netgear XE102 was among the least-expensive and easiest-to-find devices using HomePlug 1.0. Linksys apparently still sells one, the PLEBR10, but I didn’t see it for sale anywhere.

  • HomePlug 1.0 with Turbo

“Turbo” HomePlug 1.0 devices aren’t part of the official HomePlug standard, but exist as a sort of de facto standard because of a feature in a particular Intellon chipset (the INT5500) that was used in many devices. “Turbo” mode provides up to 85Mb/s under optimal conditions, with reports putting real-world performance down around 20-30 megabits. Theoretically, HomePlug 1.0 Turbo devices from various vendors ought to be compatible. As with HomePlug 1.0, not all vendors seem to be forthcoming about labeling their products with the standard they actually use, but as far as I know, 1.0 Turbo devices are the only ones likely to be labeled as “85 Mbps”. Netgear labels their Turbo devices as “85Mbps Powerline”, eschewing the HomePlug branding completely.

  • Netgear “Powerline HD”

As far as I can tell, Netgear’s “Powerline HD” is a proprietary protocol used only by a handful of their power-line networking devices. It allegedly provides up to 200Mb/s, but isn’t compatible with 200 megabit devices from other vendors. The HDX101 seemed to be the most common device using this scheme, although there’s also the HDX111 which (despite being called “Powerline HD Plus”) is apparently identical except for providing a pass-thru outlet.

  • HomePlug AV

The newest version of the HomePlug multi-vendor standard is the ‘AV’ variant, which provides for speeds up to 200Mb/s (150Mb/s usable, after overhead) under optimal conditions, with QoS and AES encryption. HomePlug AV devices are available from several vendors, and seem to becoming the dominant power-line networking standard, displacing the 85Mb ‘Turbo’ and 200Mb proprietary devices at the top of Linksys’ and Netgear’s lineups. Netgear offers HomePlug AV — calling it “Powerline AV” — in the XAV101, priced at an MSRP of $80 ea. or two for $140 as the XAVB101. Linksys matches this with the PLE200 (PLK200 for the bundle of two), priced similarly.

At this point, I think a person would be foolish to buy anything except the newest HomePlug AV devices, since any of the earlier revisions are likely to become obsolete and hard to find soon. In particular, the proprietary 200Mb devices seem like they should be avoided like the plague.
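Since the original motivation was streaming MPEG-2 from MythTV, here’s a back-of-envelope check of the options above against a broadcast-rate stream. The ~19.4 Mb/s ATSC figure and the per-standard real-world estimates are my own assumptions, except the 20-30 Mb/s Turbo range quoted earlier:

```python
# Will each power-line option carry one broadcast MPEG-2 stream?
# ATSC MPEG-2 peaks around 19.4 Mb/s (assumed); real-world throughput
# figures below are rough guesses except where the text gives a range.
STREAM_MBPS = 19.4

real_world_mbps = {
    "HomePlug 1.0": 6.0,         # guess: well under the 14 Mb/s nominal
    "HomePlug 1.0 Turbo": 25.0,  # middle of the "20-30 megabits" reports
    "HomePlug AV": 60.0,         # guess: a fraction of the 150 Mb/s usable
}

for name, rate in real_world_mbps.items():
    verdict = "OK" if rate > STREAM_MBPS else "too slow"
    print(f"{name}: ~{rate} Mb/s -> {verdict}")
```

By this rough measure, only the Turbo and AV generations have any headroom for the MythTV use case, which matches the conclusion above.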

[/technology] permalink

Tue, 12 Aug 2008

A few months ago, without giving it a whole lot of thought, I set up my home computers to retrieve my work email, by adding my work account to Apple Mail. This is, as any Apple Mail user can attest, pretty trivial to do. Even the relatively old version of Mail that I use, version 2.1.3 (753.1/753), has built-in support for Exchange servers, and is known to work reasonably well with the beast from Redmond. All you need to do is enter your Exchange server (“Incoming mail server” in Mail), Outlook Web Access server, and SMTP server information, and Mail does the rest.

How exactly Mail deals with Exchange servers is still a bit of a mystery to me; I don’t think it speaks the native Exchange protocol, but instead uses a combination of IMAP and HTTP connections: IMAP for some functions (mailboxes and personal folders) and WebDAV into the OWA server for others (public folders).

I didn’t give it a whole lot of thought because it ‘just worked’ as soon as I put in the correct server information. My work email started showing up in my Mail inbox, and I considered the project finished. No trouble at all.

At least, no trouble on my end. Several weeks passed, and all the while my Macs were happily connecting to the hosted Exchange service that runs my work’s email (which shall, for the moment anyway, remain nameless), downloading messages and attachments. I even got S/MIME to work without issues. But then, out of the blue, I got an email from the operations department of the email-hosting service, asking me to give them a call right away.

When I called, I learned that something I was doing was causing thousands of “rendering errors” to pop up in their server logs. Initially they thought this was due to a corrupt message, but after checking all my messages (a tedious process), I mentioned that I was using Mail to connect. When I disabled Apple Mail, the errors stopped flowing. When I turned it back on, they restarted.

This, of course, went over pretty much like a fart in church. Since they don’t officially support anything but Outlook on Windows, if the problem couldn’t be resolved, I’d just have to stop using Mail. (Sadly the alternative — get another hosting provider — isn’t really an option.)

So far, I’ve yet to find a solution. It’s made difficult by the fact that the errors don’t seem to affect me on this end — as far as I can tell, Apple Mail works perfectly. But whatever it’s doing on the far end seems to really displease the hosting service’s sysops.

Googling hasn’t turned up anyone else having the same problem, either. This strikes me as odd — given that Apple Mail has a distinct option for connecting to an Exchange server, I doubt I’m the only person to try and use it. Furthermore, the problem seems to be specific to the desktop version of Mail; other people I know who get their email via the iPhone haven’t gotten any nastygrams, so it’s not an all-IMAP or even all-Apple issue. Yet it seems to happen with both of my Macs (both running 10.4 and Mail.app 2.1.3), regardless of whether they’re configured to connect to the hosting service via Exchange or IMAP, and whether they connect from home or some other location.

If anyone has ever experienced this problem, I’d be eager to hear any reports of possible solutions, or even just descriptions of what was happening. (Details on the errors from the server side in particular would be welcome, since the hosting company hasn’t been all that forthcoming with what’s going on.) Given the MS-centricity of the hosting provider, I think that finding my own solution to the issue is going to be the only way to continue using Mail.

The only potential solution I’ve come up with so far is to run a MUA/MTA on my home server (Ubuntu Linux), and have it fetch messages from the hosting provider via IMAP every few minutes, spool them, and then make them available to my Macs using UW-IMAPD or Courier IMAP. This strikes me as a nasty kludge and a possible source of significant delay in receiving messages, but it would at least create a Linux “buffer” between Apple Mail and Exchange. If they can’t be made to play nice with each other, this may be the only way to keep everyone happy.
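Sketched out, that buffer would be little more than a fetchmail daemon feeding a local delivery agent. A minimal ~/.fetchmailrc along these lines would do it — all hostnames and credentials below are placeholders, not my actual provider’s:

```
# ~/.fetchmailrc -- poll the hosted service over IMAP every 5 minutes
# (hostname, username, and password are placeholders)
set daemon 300
poll mail.example.com protocol IMAP
    user "workuser" password "secret"
    ssl
    mda "/usr/bin/procmail -d %T"   # deliver locally, for UW/Courier IMAP to serve
```

The polling interval is the source of the delay I mentioned; shortening it trades latency for more connections to the provider.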

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 03 Aug 2008

This is just a quick breakdown, as far as I’ve been able to determine, of which BlackBerry models will allow you to access the ‘net through them from a Mac, and which are too braindead/broken/Windows-centric. (This is not necessarily an exhaustive list, and is not guaranteed to be correct! Be sure to do your own research before purchasing or signing a contract.)

Options to consider:

  • BlackBerry 8800 (GSM)

Reported to work; source is Tom Yager of Enterprise Mac. The discussion forum on Fibble.org also has some instructions. This seems to be the starting point for most of the 88xx variants.

  • BlackBerry 8820 (GSM and WiFi)

Reported to work by BlackBerryForums member “CatherineLW”, via Bluetooth only.

  • BlackBerry 8830 (CDMA and Euro-GSM)

Reported to work via Bluetooth, not via USB. Follow-up article here. It’s also described as working (in “exactly 2 minutes”) in this article.

  • BlackBerry 8300 “Curve” (GSM)

Reported to work. Page includes links to required modem script and information on init strings. In the comments there are intermittent problems reported, so it’s apparently not a foolproof solution.

All information seems to relate to Bluetooth tethering; there’s no mention of success (and lots of failures) trying to tether via USB. Apparently USB tethering is, for some reason, only possible from Windows.

  • BlackBerry 8320 (GSM and WiFi)

The 8320 is a special version of the 8300 made for T-Mobile; it includes some additional features including 802.11a/b/g UMA calling.

There are some scattered reports indicating that it works, and some others saying that it doesn’t. It seems like it ought to work; problems may relate to bad software revisions.

  • BlackBerry 8100 “Pearl”

Reported as working by Dave Taylor of AskDaveTaylor.com, and Grant Goodale of Fibble.org.

However there are serious issues with particular software revisions. Software version 4.2.1.107 in particular, which was pushed out to T-Mobile phones, is known to have issues.

Options to avoid:

  • BlackBerry 8700

For some reason, normal Bluetooth DUN methods don’t work with the 8700 series, which is unfortunate because it’s relatively inexpensive and in all other respects a nice phone (particularly the ‘G’ revision).

There’s a whole saga of efforts, including a substantial ‘bounty’, put towards getting this thing working as a USB or Bluetooth-tethered connection, but there doesn’t seem to be a very satisfactory solution. The closest anyone seems to have gotten is a $50 software package called “Pulse” which allows tethering via a proxy server that you must run (or pay for the use of), through which all traffic flows. Although I appreciate the effort involved, this doesn’t strike me as particularly elegant — frankly it’s unacceptable that it’s even necessary. Anything that requires that much of a workaround to use is broken.

The bottom line:

It’s not really much of a surprise that RIM doesn’t seem to focus very heavily on anything except Windows, given their established userbase in the corporate market, but it’s still a bit disappointing. The best BB device for Mac users at the current time seems to be one of the 8800 series, either the 8800 itself (currently retailing for $280) or the 8820/30 variants depending on whether you want GSM or CDMA service within the US. Either the Curve or the Pearl would seem to be a close second; I only give the 8800 an edge because it’s newer, and will probably be getting more attention for longer than either of the older models.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

While this isn’t breaking news or anything, Jeff Starr has a nice tutorial posted over at PerishablePress.com, explaining how to set up a BlackBerry Curve as a Bluetooth DUN device with a Mac. This allows you to connect to the Internet from the Mac via the BlackBerry, provided you have a data plan that supports ‘tethering’. (This includes — to my knowledge anyway — all T-Mobile unlimited data plans including ‘BlackBerry Unlimited’, but only some AT&T plans.)

The solution is of the same form as for most other modern phones: a custom modem script that gets dropped in /Library/Modem Scripts/ and a CID string that tells the phone to open a data connection to the network.
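For the curious, the ‘CID’ refers to the PDP context ID in the standard GSM AT+CGDCONT command, which is what actually points the phone at the carrier’s data network. It typically looks something like the following — the APN shown is the one commonly cited for T-Mobile US, used here purely as an illustration; yours may differ:

```
AT+CGDCONT=1,"IP","wap.voicestream.com"
```

This string goes in the “Telephone Number” (or init string) field of the Network preferences, depending on which tutorial you follow.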

The instructions are specifically for the Curve, aka the 8300, but a very similar procedure works for the 8800, with a slightly different modem script. (Also see this EnterpriseMac article.)

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

Thu, 31 Jul 2008

I spent a little time earlier this evening looking at hard drive prices. Since I’m a spreadsheet junkie, ‘comparison shopping’ tends to involve a lot of copying and pasting into Excel or Google Spreadsheet. This was no exception, and the results clearly showed a price-per-GB “sweet spot” in the 750GB drives.

Although we’d expect drives to get cheaper, in terms of capacity per dollar, over time (that’s what all those engineers at Seagate and Hitachi are paid to do, after all), it’s almost always been the case that the cheapest storage trails the technological bleeding edge by a certain amount. Principally I think this is due to the drive manufacturers overpricing the newest drives compared to older ones, in order to squeeze the early adopters for all they’re worth.

Right now, 1TB drives are selling at a slight per-GB premium compared to 750 and 500 GB models; it’s not until you get down into 320 and 250 GB drives that the per-GB price starts to creep back up above them. Hence, if you’re not desperate for the full terabyte today, it’s better to buy a slightly smaller drive and wait a few months for prices to drop some more.
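The comparison itself is trivial to automate once the prices are in hand. A quick sketch of the dollars-per-GB calculation — the prices below are made-up placeholders to show the shape of the thing, not my actual August 2008 data:

```python
# Compute price per GB for a set of drives and find the "sweet spot".
# Prices here are illustrative placeholders, not real survey data.
drives = {250: 55.0, 320: 62.0, 500: 75.0, 750: 105.0, 1000: 170.0}

per_gb = {size: price / size for size, price in drives.items()}
sweet_spot = min(per_gb, key=per_gb.get)

for size in sorted(per_gb):
    print(f"{size:>5} GB: ${per_gb[size]:.3f}/GB")
print(f"Sweet spot: {sweet_spot} GB")
```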

Anyone interested in the actual data can view it either via Google Docs, or as CSV. (The Google Docs version is preferable unless you have a burning desire to load it into Excel and do an X/Y plot.)

Google Docs Link
August 2008 Hard Drive Prices - 1kB Comma-Separated ASCII text (CSV)

0 Comments, 0 Trackbacks

[/technology] permalink

Tue, 22 Jul 2008

Several months ago I wrote about the legal problems facing electronic ‘alternative currencies’ and the shuttering of one particularly sketchy operation — e-gold-based ‘meta-currency’ 1MDC.

Now it seems that the owners of E-Gold are facing stiff fines and possible prison time after pleading guilty to conspiracy to engage in money laundering and operating an unlicensed money-transmitting business, an indictment E-Gold’s founder once called “a farce.”

Basically, the Feds really didn’t like the core strength of E-Gold, which was that it provided a way to anonymously transfer funds without any sort of user verification. E-Gold didn’t make you prove who you were, and thus there wasn’t any prohibition on how many accounts you could have, which meant that there wasn’t a way to really bar someone from using the service — close down one account, and they could just open up a new one.

Unsurprisingly, the plea agreement includes a “comprehensive money-laundering-detection program that will require verified customer identification” — in short, an end to anonymous transfers.

Although E-Gold never amounted to much in the world of legitimate commerce, and it probably would be little missed by most people if it disappeared completely as a result of the changes, it’s unfortunate and sad to see yet another early-Internet dream — that of anonymous, untraceable electronic currency, immune to the whims of national law or taxation — go (dare I say it) down the tubes.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Fri, 18 Jul 2008

Although I suspect that I’m probably among the last to read it, I ran across Richard W. Fisher’s excellent speech to the Commonwealth Club of California, earlier today. Called “Storms on the Horizon”, it was delivered May 28 in San Francisco.

I think it’s worth a read by anyone; despite being a few months old at this point, it’s still quite topical. His main focus is on fiscal (as opposed to monetary) policy, which hasn’t been getting very much attention lately. In particular, he concentrates on the issue of unfunded Social Security and Medicare liabilities, and the effect they will have on the overall government budget deficit.

His general premise — that both Social Security and Medicare, but especially the latter, cost tremendous amounts of money — is not very controversial. Where he splits from the current administration’s party line is over whether we’ll have the ability to pay for them in the not-too-distant future without going into the red.

In keeping with the tradition of rosy scenarios, official budget projections suggest [the current] deficit will be relatively short-lived. They almost always do. […] If you do the math, however, you might be forgiven for sensing that these felicitous projections look a tad dodgy. To reach the projected 2012 surplus, outlays are assumed to rise at a 2.4 percent nominal annual rate over the next four years — almost double the rate of the past seven years. Using spending and revenue growth rates that have actually prevailed in recent years, the 2012 surplus quickly evaporates and becomes a deficit, potentially of several hundred billion dollars.

That deficit is driven in large part by the costs of Social Security and Medicare, which — especially when viewed long-term — are staggering to behold. Fisher gives the net present value of only the unfunded portion of both programs as $99.2 trillion USD; if paid yearly (‘pay-as-you-go’) instead of up front, as they would in a balanced budget, they represent 68% of current income tax revenue.

If that doesn’t give you immediate pause, it should. Particularly as we seem to be headed for an economic downturn, that 68% will only increase if income tax receipts decline. The bottom line is brutal:

No combination of tax hikes and spending cuts, though, will change the total burden borne by current and future generations. For the existing unfunded liabilities to be covered in the end, someone must pay $99.2 trillion more or receive $99.2 trillion less than they have been currently promised. This is a cold, hard fact. The decision we must make is whether to shoulder a substantial portion of that burden today or compel future generations to bear its full weight.

Or, of course, the third path, the one no politician wants to mention: cut back drastically on benefits. In reality I think it’s inevitable that this will be a major part of any solution. Nothing else will work, particularly if there’s a serious recession or depression. Fat chance selling the American public on that, though, especially those who have spent decades paying into a system that was supposedly for their retirement, but was actually being looted by Congress for other purposes.

Fisher warns against the temptation presented by the Mint:

We know from centuries of evidence in countless economies, from ancient Rome to today’s Zimbabwe, that running the printing press to pay off today’s bills leads to much worse problems later on. The inflation that results from the flood of money into the economy turns out to be far worse than the fiscal pain those countries hoped to avoid. […] Even the perception that the Fed is pursuing a cheap-money strategy to accommodate fiscal burdens, should it take root, is a paramount risk to the long-term welfare of the U.S. economy. The Federal Reserve will never let this happen. It is not an option. Ever. Period.

This at least is reassuring — or, rather, it should be. But as many have noted, the Fed has essentially been playing the cheap-money game for a while, and continues to play it today, by stoking the bubble economy with bargain-basement interest rates. While this admittedly isn’t Zimbabwe or Weimar Republic-style money printing, it certainly undermines the Fed’s credibility when it claims to have long-term health rather than short-term painlessness in mind.

Towards the end of the speech, Fisher points the finger at the place where the buck really stops: voters.

When you berate your representatives or senators or presidents for the mess we are in, you are really berating yourself. You elect them. You are the ones who let them get away with burdening your children and grandchildren rather than yourselves with the bill for your entitlement programs.

However, I take a little issue with his conclusion:

Yet no one, Democrat or Republican, enjoys placing our children and grandchildren and their children and grandchildren in harm’s way. […] You have it in your power as the electors of our fiscal authorities to prevent this destruction.

While I appreciate the sentiment (and his need to end on something other than a doom-and-gloom note), I see no evidence to support his assertion that either Democrats, Republicans, or the American public at large have any problem burdening their children and grandchildren in order to get a check cut today. Over and over again, we have seen just that happen. Voters are only too happy to pay Tuesday for their hamburgers today.

The voters have it in their power to prevent a disastrous fiscal policy crisis from taking shape, but they haven’t done so thus far, and I see little reason why that will change at the 11th hour.

0 Comments, 0 Trackbacks

[/politics] permalink

Thu, 17 Jul 2008

As the markets have sunk further and further over the past few weeks, predictions of where we’re headed, either as a nation or the entire world, whether rosy or Mad-Maxian, have flourished. I admit to having a slightly more-than-academic interest in all this — after all, I live here, too — but rather than take my own shot in the dark, I thought it would be more useful to try and round up a few of the most interesting predictions or forward-looking statements made by others.

The diversity of opinion on where we’re headed really can’t be overstated. Although I think the overall outlook is getting bearish, there’s still room for disagreement (or at least there appears, to my eye, to be room for disagreement) as to how bad things are going to get, how quickly they’re going to go, and how long they’re going to take to recover — if they ever will.

Time will just have to tell which were in retrospect prescient and which were following in the great tradition of being somewhat less so.

(Macroeconomic predictions are by their nature almost always incorrect in one respect or another, so this is purely for entertainment, and shouldn’t be used to judge any of the people involved, now or in the future.)

Russ Winter: Super-Bear

Russ Winter, of the Wall Street Examiner’s “Winter Watch” blog, paints a grim picture: corrupt Washington insiders looting the U.S. economy for everything it’s worth, fleecing consumers and taxpayers in order to ensure they’ll live like kings in the coming economic downturn.

This isn’t just a recent bandwagon he’s hopped on, either; he’s been saying it pretty consistently for several years now. Back in 2006 he correctly called BS when Alan Greenspan predicted that the worst of the housing bubble “may well be over.” (The foreclosure rate has only continued to climb since then.) I haven’t looked back much further in his archives, but he’s been writing online since late 2005.

Select Predictions and Statements:

  • The current Fannie and Freddie Mac ‘crisis’ is being used as a “Reichstag Fire” to provide an excuse for vast amounts of assets to be transferred to foreign interests, while a small number of cronies reap the profits.

  • “I believe an incredibly large amount of American assets and economic capacity will pass fairly quickly into the hands of Pig Men [“the financial sphere, typically brokers, banks, Fed dealers”] interests before Bush leaves office. There is going to be a massive unprecedented rearrangement of the money tree.” Source. (Def. of “Pig Men” from here.)

  • “[I]t really does look like the next crisis [after Fannie and Freddie] is Lehman Brothers.” Source. (With a link in the original to this page.)

  • “Oil demand numbers in the US and globally are clearly falling off a cliff. Don’t be too surprised to see a panic drop in oil soon enough, maybe on the order of $15-20 in one day to catch up with what has already happened.” Source.

Metafilter’s Own: Malor

Malor, a well-known contributor on MetaFilter, also has a less-than-rosy outlook.

Rather than putting words into his mouth, I’ll quote him directly. The following is excerpted from this post:

[T]he financial system [has become disconnected] from any kind of on-the-ground reality. Stock and house prices have gone far into ridiculous territory, driven there by a combination of stupidly low interest rates and a massive oversupply of what looked like available capital. It also has caused an enormous, gigantic, unbelievable trade imbalance and debt position on the part of the consumer. […]

There is no good outcome here. None. We’ve backed ourselves into a corner from a series of incredibly bad decisions. If the Fed screws up, or if it miraculously realizes that we’re doomed if it doesn’t, we will have a massive deflationary crash, the Second Great Depression. As the debt we’ve taken on goes bad, it will cause deflation, the deflation that has been hidden from us by the monetary games. This is the best possible outcome.

A deflationary crash is one possibility; hyperinflation is another (from this post):

If the Fed can keep us on the tracks, [hyperinflation is] inevitably where we’re going to end up. We have too much debt, and to try to hide that fact, the Fed is causing more and more debt to be issued.

This is, he continues, fundamentally flawed:

The US government is doing the same thing Zimbabwe is doing; trying to extract more value out of the economy than the economy can support. We’re already over fifty trillion dollars in debt, in today’s dollars. […] We can’t pay those debts. We can’t pay off our personal debts. And we can’t service the enormous position that other countries have in our currency, which is another, hidden form of debt.

The bottom line:

It’s a house of cards. It has to collapse. Which way it will collapse, I don’t know, but it has to go into either deflation or hyperinflation.

0 Comments, 0 Trackbacks

[/finance] permalink

From here, which has photos of each. Unfortunately there’s no easily downloadable list to let you compare your alcoholism to your friends’, so I typed one up. Enjoy.

Best Hotel Bars List: 1.17kB ASCII.

0 Comments, 0 Trackbacks

[/other] permalink

Wed, 16 Jul 2008

One of the things that’s frustrated me for a while in Emacs is working with diacritics (accented characters) and other international text. Although as a basically monolingual English-speaker I do most of my writing well within the low-ASCII range, every once in a while I find it necessary to reproduce an accented word or string of international text.

Although typing accented characters (and other Latin-1 symbols) is very easy on a Mac in a native editor like TextMate, I’d never spent the time to figure out how to do it in Emacs. However, since Emacs is sort of the least-common-denominator editor, I decided it would be worth figuring out; unlike OS-specific dead-key methods, the Emacs way should work anyplace Emacs is installed. (And I use Emacs regularly on Mac OS X, Windows, Linux, and NetBSD — although the latter two are usually only through SSH sessions.)

Anyway, actually entering accented characters and other basic non-ASCII characters is the easy part. The easiest way is to turn on ‘iso-accents-mode’ within Emacs, and then let it convert character sequences (like "a for ä) to their Latin-1 equivalents.

The trickier part was getting them to display correctly. The first time I tried using iso-accents-mode, the non-ASCII characters were just displayed as question-mark (?) characters. I quickly traced this to a problem in Emacs, rather than in my terminal (by saving the file and then displaying it with cat, which showed the characters properly), and then with a little more research, to an issue with the “terminal-encoding” parameter.

Basically, Emacs’s “terminal encoding” controls what character set Emacs uses when displaying text (sending it to the terminal device that you’re using to interact with it). It’s distinct from the character set that the file is actually being interpreted using, and also possibly separate from the character set that’s used to interpret keyboard input.
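None of this is specific to Emacs; the same mismatch can be demonstrated in a few lines of Python (purely an illustration of the bytes involved, not of anything Emacs does internally):

```python
# 'ä' is one byte in Latin-1 but two bytes in UTF-8.
ch = "ä"
latin1 = ch.encode("latin-1")   # b'\xe4'
utf8 = ch.encode("utf-8")       # b'\xc3\xa4'

# A UTF-8 terminal handed the Latin-1 byte sees an invalid sequence
# and shows a replacement/question-mark character...
print(latin1.decode("utf-8", errors="replace"))
# ...while a Latin-1 terminal handed the UTF-8 bytes shows two wrong characters.
print(utf8.decode("latin-1"))
```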

Since I have a UTF-8 terminal (set using the “Window Settings” window, under the Terminal menu, in OS X’s Terminal.app), I set Emacs to use UTF-8 as its terminal encoding by adding the following to my .emacs file:

(set-terminal-coding-system 'utf-8)

With this done (both locally and on the remote systems I SSH into), I was able to see all the non-ASCII characters properly. In fact, not only were Latin-1 characters correctly displayed, but Unicode smartquotes and symbols were also correctly displayed for the first time.

The only issue I anticipate with this is that, when I do connect from a non-UTF-8 terminal (like Cygwin’s Win32 version of rxvt), I’m probably going to get garbage instead of Unicode. However, that’s not really the fault of Emacs, and it’s always possible to temporarily change the terminal encoding back to ASCII if necessary. I just want UTF-8 to be the default.

References:

  • Information on permanently setting the terminal-coding-system came from this osdir thread.
  • General information on Emacs terminal encoding came from the Emacs documentation, section 27.14, accessible here.
  • Also handy is section 27.17, on “Undisplayable Characters”.

0 Comments, 0 Trackbacks

[/technology] permalink

Tue, 15 Jul 2008

I ran across a nice blog posting by Steven Frank while trolling through Reddit earlier today, and I thought he was right on: “Don’t Use FTP” is pretty good advice for just about anyone.

It’s not that FTP wasn’t a good idea when it was designed; it was nice, it worked, and it served us all well for many years. But it just hasn’t aged well. As Frank points out (see “Note 2” down towards the bottom), although there are many other protocols still in use that were created around the same time, most of them have been extensively updated since then. FTP hasn’t; the defining document for the protocol — insofar as one actually exists — is still RFC 959, written in 1985.

It’s a bit unfortunate that it’s been allowed to languish, because it does serve a need (which is why it’s still around, despite its insecurity and firewall-traversal issues and everything else): it’s a lingua franca for bulk file transfers between systems. It’s certainly better, in theory if not in practice, than abusing port 80 and HTTP for the same purpose. However, given that alternatives (SFTP in particular) exist, there’s really no excuse for using it in new installations or for interacting with a modern hosting environment. Any commercial provider that only offers FTP as a bulk-transfer option should be called publicly onto the carpet; that’s simply not acceptable practice in 2008.

0 Comments, 0 Trackbacks

[/technology] permalink

Mon, 14 Jul 2008

It’s a new week, and for me, that means a new plane trip. And a new plane trip means new reading material.

Having finished Jared Diamond’s (excellent) Collapse — post forthcoming, eventually — I’ve moved on to GMU Professor Rick Shenkman’s book Just How Stupid Are We?

I saw Shenkman on “The Daily Show” a few weeks ago and ordered the book based pretty purely on that; he seemed like an intelligent guy making an interesting point. (Also, I needed something to round out an Amazon order. Yay for free shipping.)

It’s a short book, written in fairly large type. Perhaps this is appropriate given Shenkman’s overall thesis: over the past 50 or 60 years, we as a society have given the ‘American Voter’, otherwise known as ‘The People’ (as in “we the People…”) far too much credit and far too little blame for our policy failures as a nation. In other words, we’re all a lot more stupid than we like to think (and have our leaders tell us) we are.

In our search for places to lay blame, few stones have been left unturned. Bankers, investors, lobbyists, corporate executives, trial lawyers, members of the media, and of course politicians in general have all faced criticism. But only very rarely does anyone take the American people, collectively and as a group, to task for their complicity in the outcomes of government.

It’s a controversial question to ask because most of us have been taught, and probably believe quite sincerely, that “more democracy = better”, and it’s hard to blame the people for much of anything without considering whether that’s necessarily always true. Put bluntly: ‘Is more democracy really better democracy, if the people, by and large, show little-to-no inclination to do anything besides blindly accept whatever they’re told?’ Even raising the question endangers some very sacred American cows, and opens the questioner to accusations of being “undemocratic” or “elitist”.

One thing that I haven’t encountered in the book so far — and I’m about 60% of the way through, and will hopefully finish it later this week — are any proposed solutions to fix the system that we’ve created. It’s all well and good to criticize how we got to where we are, but that doesn’t provide much help in moving forward. So I’m hopeful that he’ll make some suggestions as to how the level of discourse or the system in general can be improved.

I’m holding off overall judgment on the book until I’ve finished it, but in general I thought the premise was pretty good. We’ll see if my feelings change once I make it through the conclusion.

0 Comments, 0 Trackbacks

[/other/books] permalink

Fri, 04 Jul 2008

I do most of my Usenet reading through an SSH session using the slrn newsreader, which in my opinion is one of the best around (better than gnus even, although I still use Emacs as an editor). One of the better things about it is its very flexible killfile system. In reality slrn doesn’t have a “killfile” per se, instead it has a “scorefile”, which allows you to apply numeric scores to articles based on regular expressions, killing them when they drop below a threshold.

Anyway, since it allows the use of regular expressions, it’s useful for filtering out “sporgery” and spam designed to defeat less-flexible filtering, like the MI5 Persecution nonsense.

Here’s a set of rules I set up for killing the latest batch of crap:

[*]
% Kill "MI-5 Persecution" crap
Score:: =-9999
   Subject: [A-Z][',-`. ]I[',-`. ]5[',-`. ]P
   From: MI5Victim

The first rule (the Subject: one) is designed for the latest batch, which have varied subject lines and randomly-generated From-addresses. The second rule (the From: one) is for catching the older batch of messages, which all used the same From-address and didn’t vary their headers as much. I keep the old rule around because I sometimes like to read groups where there isn’t much activity, and thus end up seeing them almost as often as I do the new ones.

It’s almost a certainty that the rules will have to be tweaked, or a new rule added, the next time a bunch of messages come out, if the spammer continues to ‘enhance’ the headers to defeat filtering. That is, of course, unless MI5 gets him first. But somehow I doubt we’re all that lucky.

The regexp used to catch the newer messages was taken from Wikipedia, and it seems to work fine, although I’ve been thinking of tweaking it a little more. Ideally I’d like to broaden it until there are no possible permutations of the subject that wouldn’t get caught, regardless of letters placed in between the message characters, or any similar-character replacements (e.g. replacing the letter “I” with “|”, or other similar L33T-type stuff).
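To sanity-check the Subject: pattern before dropping it into the scorefile, it helps to exercise the regex outside slrn. A quick Python check — the sample subjects are made up for illustration:

```python
import re

# Same pattern as the scorefile's Subject: rule.  Note that the [',-`. ]
# class is mostly the ASCII range from ',' (44) to '`' (96), so it swallows
# dashes, dots, digits, and even uppercase letters used as separators.
pattern = re.compile(r"[A-Z][',-`. ]I[',-`. ]5[',-`. ]P")

subjects = [
    "M.I.5.Persecution continues",        # separator dots: caught
    "M-I-5-P-e-r-s-e-c-u-t-i-o-n",        # separator dashes: caught
    "Normal discussion of MI5 history",   # no separators: not caught
]
for s in subjects:
    print(s, "->", bool(pattern.search(s)))
```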

I’ve only begun playing more seriously with slrn and its scoring features, so as I get a decent scorefile worked out, I’ll probably post some occasional updates, just in case somebody wants to use it as a starting point.

0 Comments, 0 Trackbacks

[/technology] permalink

Tue, 01 Jul 2008

I was pleased to read today that Netflix has come to its collective senses and decided to save the “Profiles” feature. For those of you living under a rock, Profiles was a neat feature that Netflix offered, allowing you to essentially split your account into ‘sub-accounts’ each with their own queue and number of simultaneous movies. This was pretty nice if you had multiple people (say, family members, or you and a S.O.) sharing the same account.

Their elimination of the feature was ostensibly to simplify the website by removing a feature that few users actually took advantage of, but many felt it was done more to encourage the purchase of multiple accounts (which cost more than one account, even one with many movies at a time).

This is by any measure a good thing. Netflix avoided doing something very stupid and alienating its userbase (probably driving more than a few of them right into the arms of the competition, Blockbuster) by announcing its intentions, listening to the response, and then changing its tune when it became obvious it was about to shoot itself in the foot. All good. This should be a lesson to others on how to craft policy that affects your users.

Unfortunately, they had already disabled access to the feature for most users, apparently in preparation for killing it outright. (Which is a bit of a drag for folks like me, who were holding off because they’d only heard of it as a result of the hubbub and didn’t want to try something that was on its way out.) But according to the official blog, the option to create new profiles will return in a couple of weeks. Here’s hoping.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Thu, 26 Jun 2008

Earlier this week I ran into a pesky issue when using slrn on a remote machine, inside Screen, over an SSH session, via rxvt, running under Cygwin on a WinXP box. The problem looked like this, and seems to be some sort of either character-encoding or display problem with non-ASCII characters used in slrn’s text-mode interface.

It’s an obnoxious problem because it rather seriously interferes with slrn’s thread-tree display, and because there are so many different layers involved. Starting from the user and working backwards towards the source, there are rxvt, Cygwin, Windows, SSH, Screen, slrn, and Linux, any of which could be to blame. (Although some are a lot more likely than others.)

The simplest and least elegant way to solve the problem is just to force slrn into pure-ASCII mode, by putting

set simulate_graphic_chars 1

into .slrnrc. However, that just seems wrong. VT100 box-drawing characters, which is all slrn seems to be using, aren’t exactly high-tech stuff — we’re talking about the very best of 1978, here. This isn’t Unicode or anything sexy; it’s just an alternate 7-bit character set triggered by “shifting in” and “shifting out” using escape sequences.
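For the curious, the mechanism can be demonstrated from Python. This is only a sketch of the standard VT100 escape sequences; whether anything actually renders as line art depends on which of the layers above mangles it first:

```python
# Sketch of the 7-bit VT100 box-drawing trick slrn relies on: designate
# the DEC Special Graphics set as G1, then toggle it with the one-byte
# Shift Out / Shift In controls. Between SO and SI, plain ASCII letters
# render as line art on a VT100-compatible terminal.
ESC = "\x1b"
SO, SI = "\x0e", "\x0f"        # Shift Out (select G1) / Shift In (back to G0)
DESIGNATE_G1 = ESC + ")0"      # put DEC Special Graphics into G1
# In that graphics set, 'l', 'q', and 'k' are the top-left corner,
# horizontal line, and top-right corner, respectively.
top_edge = DESIGNATE_G1 + SO + "lqqqk" + SI
print(repr(top_edge))
```

Somewhere along the chain above, either the designation escape or the SO/SI bytes are evidently getting eaten or misinterpreted.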

I think the problem is probably a termcap/terminfo issue, but I haven’t been able to get any results via any combination of terminfo settings that I’ve tried.

0 Comments, 0 Trackbacks

[/technology] permalink

Sat, 14 Jun 2008

A few days ago I mentioned I’d picked up Jared Diamond’s book Collapse as my travel reading. Due to a few long airport delays (thanks, Delta), I’m now more than halfway through.

Rather than holding my thoughts until the end, I’ll be blunt: so far, I’m tremendously impressed. It’s a much more engaging read than Guns, Germs, and Steel, perhaps because it’s focused more narrowly, and the style is a little less academic. It is at times a downright chilling book.

Diamond’s discussion of the fate of the Easter Islanders is often mentioned in summaries and descriptions of the book, maybe because it’s in the first chapter. However, he spends far more time talking about the fate of Norse Greenland, and from the perspective of a modern American, it’s an easier story to relate to.

There’s a certain horror-movie quality to reading about the downfall of societies, especially ones who arguably doomed themselves or contributed to their own demise. Except instead of “don’t go in the basement!” it’s “don’t cut down those trees!” or “don’t try to graze sheep there!” But we know, of course, what’s going to happen.

You would have to be particularly thick to read Collapse and not draw substantial parallels to the fragility of our current society — not least of all because Diamond sometimes goes out of his way to explicitly make the point. Recently, the New York Times Sunday Book Review asked a number of prominent authors for books that they’d like to see the current crop of Presidential candidates reading. Personally I’d be happy if any of them picked up Collapse.

0 Comments, 0 Trackbacks

[/other/books] permalink

Fri, 13 Jun 2008

I’ve been pretty pleased with the results of my experimental entry into the world of VoIP; it had been working without a hitch. Up until tonight, anyway.

I noticed the problem when I went to call the new home VoIP number from my cellphone, and got a “Not available” message from Callcentric. I knew immediately that something was not right, because that shouldn’t ever happen (unless the power was out or Internet service was interrupted). When I got home I logged into the router’s configuration page, and discovered that the line was no longer registered with Callcentric’s servers.

I started off by fixing the obvious things, including network connections and a power cycle. I made sure I could ping Callcentric, so no problems there. The configuration on the ATA matched their website (plus, it had been working fine for a week), so hopefully no problems there. To rule out NAT issues, I put the ATA temporarily in the LAN DMZ. Still no dice.

Getting a little more desperate, I turned on the SPA-2102’s syslog feature, turned the debug verbosity up, and started tailing the output on my PC. The result was mildly enlightening:

 Jun 12 00:33:25 192.168.1.150 system request reboot
 Jun 12 00:33:25 192.168.1.150 fu:0:45af, 0038 043c 0445 0001
 Jun 12 00:33:25 192.168.1.150 fu:0:4605, 03e4 05b0 0001
 Jun 12 00:33:30 192.168.1.150 System started: ip@192.168.1.150, reboot reason:C4
 Jun 12 00:33:30 192.168.1.150 System started: ip@192.168.1.150, reboot reason:C4
 Jun 12 00:33:30 192.168.1.150   subnet mask:    255.255.255.0
 Jun 12 00:33:30 192.168.1.150   gateway ip:     192.168.1.1
 Jun 12 00:33:30 192.168.1.150   dns servers(2): 
 Jun 12 00:33:30 192.168.1.150 192.168.1.1 
 Jun 12 00:33:30 192.168.1.150 71.170.11.156 
 Jun 12 00:33:30 192.168.1.150 
 Jun 12 00:33:30 192.168.1.150 fu:0:4648, 03f6 0001
 Jun 12 00:33:30 192.168.1.150 RSE_DEBUG: reference domain:_sip._udp.callcentric.com
 Jun 12 00:33:30 192.168.1.150 [0]Reg Addr Change(0) 0:0->cc0bc017:5080
 Jun 12 00:33:30 192.168.1.150 [0]Reg Addr Change(0) 0:0->cc0bc017:5080
 Jun 12 00:33:38 192.168.1.150 IDBG: st-0
 Jun 12 00:33:38 192.168.1.150 fs:10648:10720:65536
 Jun 12 00:33:38 192.168.1.150 fls:af:1:0:0
 Jun 12 00:33:38 192.168.1.150 fbr:0:3000:3000:04605:0002:0001:3.3.6
 Jun 12 00:33:38 192.168.1.150 fhs:01:0:0001:upg:app:0:3.3.6
 Jun 12 00:33:38 192.168.1.150 fhs:02:0:0002:upg:app:1:3.3.6
 Jun 12 00:33:38 192.168.1.150 fhs:03:0:0003:upg:app:2:3.3.6
 Jun 12 00:33:39 192.168.1.150 fu:0:465a, 0003 0001
 Jun 12 00:34:02 192.168.1.150 RSE_DEBUG: getting alternate from domain:_sip._udp.callcentric.com
 Jun 12 00:34:02 192.168.1.150 [0]Reg Addr Change(0) cc0bc017:5080->cc0bc022:5080
 Jun 12 00:34:02 192.168.1.150 [0]Reg Addr Change(0) cc0bc017:5080->cc0bc022:5080
 Jun 12 00:34:34 192.168.1.150 RSE_DEBUG: getting alternate from domain:_sip._udp.callcentric.com
 Jun 12 00:34:34 192.168.1.150 [0]RegFail. Retry in 30

After that, there are just a lot of “unref domain” errors, repeated over and over every 30 seconds, as the 2102 tries to register and can’t. (Can we hear it for the guy at Linksys who got them to keep the remote logging feature?)

From this we can tell a few things. It looks like the 2102 is booting up, and then it’s looking for Callcentric’s SIP server, by querying the DNS SRV record. This is as it should be. However, for some reason it’s apparently not getting back the right server to use.

Just as a first shot to eliminate DNS issues, I swapped the DNS server values in the 2102 configuration (normally I use my gateway/router, which lives at 192.168.1.1) for my ISP’s DNS servers. No improvement. Then, I decided to try pulling the SRV records manually, to see if there was an obvious misconfiguration on Callcentric’s part, or if they weren’t returning DNS SRVs at all.

Without getting into a whole sidetrack on how DNS SRV records work, the way to pull them is via dig. To get the server and port for SIP traffic carried on UDP for the Callcentric.com domain, you would run

 $ dig _sip._udp.callcentric.com SRV

 ; <<>> DiG 9.3.2 <<>> _sip._udp.callcentric.com SRV
 ;; global options:  printcmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11397
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 3, ADDITIONAL: 5

 ;; QUESTION SECTION:
 ;_sip._udp.callcentric.com.     IN      SRV

 ;; ANSWER SECTION:
 _sip._udp.callcentric.com. 1800 IN      SRV     5 5 5080 alpha4.callcentric.com.
 _sip._udp.callcentric.com. 1800 IN      SRV     5 5 5080 alpha2.callcentric.com.

This tells us that UDP SIP traffic should be directed to either alpha2.callcentric.com or alpha4.callcentric.com, both on port 5080. The servers have equal priority so either one can be used. Running a quick host alpha2.callcentric.com gives the A record for that server, which turns out to be 204.11.192.23.

What we’ve accomplished at this point is what the SPA-2102 is supposed to do every time it tries to register with Callcentric: query the domain-level SRV record to get the particular server for SIP traffic, then query that server’s A record for its IP address, and connect to it. We just did that manually, and now have an IP and port.
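The selection step itself can be sketched in a few lines of Python, using the two records dig returned above. This is my own illustration of the RFC 2782 priority/weight rules, not the SPA-2102’s actual logic:

```python
import random

# SRV records as returned by dig above: (priority, weight, port, target)
records = [
    (5, 5, 5080, "alpha4.callcentric.com."),
    (5, 5, 5080, "alpha2.callcentric.com."),
]

def pick_srv(records):
    """Pick a target per RFC 2782: lowest priority wins; ties are
    broken by weight-proportional random selection."""
    best = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == best]
    roll = random.uniform(0, sum(r[1] for r in candidates))
    for priority, weight, port, target in candidates:
        roll -= weight
        if roll <= 0:
            return target, port
    return candidates[-1][3], candidates[-1][2]  # guard against float edge cases

target, port = pick_srv(records)
print(target, port)  # one of the two alpha servers, always port 5080
```

Since both records here share priority 5 and weight 5, either server is a legitimate choice, which is exactly why hard-coding one of them (as I end up doing below) loses the redundancy DNS was providing.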

To see if that server worked, I put it into the SPA’s incoming and outgoing proxy fields, and turned “Use DNS SRV” off. Lo and behold, after I rebooted it, I was back online.

For the moment, anyway, things are working again. However, they’re not working the way they’re supposed to. If Callcentric decides to change its server’s IP address, I’ll no longer be able to connect. Ditto if that particular server gets overloaded. All the benefits of DNS are lost when you go this route. Therefore, it’s not really a satisfactory long-term solution.

I’ve opened a trouble ticket with Callcentric and will see what they say. Googling terms like “RSE_DEBUG” and “unref domain” produces some results — I’m apparently not the only person to have experienced this problem! — but no good solutions. It’s obviously a DNS problem, but who exactly is to blame isn’t clear. I suspect Callcentric is going to blame either the ATA configuration or my LAN setup, and in their defense, their DNS records seem to be correct. However, I can’t see how the problem can be a misconfiguration when it worked well for more than a week. I suspect I’ll probably end up on the phone with Linksys eventually.

If I do figure out some sort of solution, or even a satisfactory explanation, I’ll be sure to post it. In the meantime, if anyone happens to come across this page because they’re experiencing the same problem, the only workaround I’ve found is to manually query the SIP server IP and put that into the 2102’s configuration. (And pray your VoIP provider’s IP address assignments are relatively stable.)

Any thoughts or suggestions are, as always, appreciated.

FOLLOWUP: I got a form response back from Callcentric noting that my device was registered again, and blaming the problem on my Internet connection. (Of course, it was back up because I put the IP address in directly.) However, when I went back to using DNS SRV, it seemed to work fine … which really annoys me, because if there’s one thing I hate more than stuff that doesn’t work, it’s stuff that breaks unpredictably and for no reason.

0 Comments, 1 Trackbacks

[/technology] permalink

Tue, 10 Jun 2008

Having finished Gang Leader for a Day (someday soon I’ll get around to writing up some of my final thoughts), I’ve moved on to Jared Diamond’s Collapse for my sitting-in-airports reading. Although I’ve barely made it through the introduction, so far I’m impressed. Despite his tendency to be longwinded — the major criticism of Guns, Germs, and Steel that I agree with — he seems to have a good grasp of the complex issues underlying modern environmental problems.

There’s a choice quote in the first chapter that I wanted to highlight. Diamond quotes environmentalist David Stiller, writing about the nature of the corporation as an entity.

“ASARCO [American Smelting and Refining Company {…}] can hardly be blamed [for not cleaning up an especially toxic mine that it owned]. American businesses exist to make money for their owners; it is the modus operandi of American capitalism. {…} Successful businesses differentiate between those expenses necessary to stay in business and those more pensively characterized as ‘moral obligations.’ Difficulties or reluctance to understand and accept this distinction underscores much of the tension between advocates of broadly mandated environmental programs and the business community.”

(Text in square brackets is Diamond’s, in curly braces is mine.)

This is a good point and bears repeating. Corporations aren’t immoral; they’re amoral. Asking corporations to act ‘morally’ is like asking water to flow uphill. We’d do better to make the behaviors we want — protecting the environment, treating workers fairly, whatever they may be — profitable, either by creating genuine incentives or by punishing noncompliance, than to ask nicely and cluck our tongues when our toothless requests are ignored.

On the other side of the coin, Diamond seems to also appreciate that as simple as corporations are, actual human beings are not.

Whenever I have actually been able to talk with Montanans, I have found their actions to be consistent with their values, even if those values clash with my own or those of other Montanans. That is, for the most part Montana’s {environmental} difficulties cannot be simplistically attributed to selfish evil people knowingly and reprehensibly profiting at the expense of neighbors. Instead, they involve clashes between people whose own particular backgrounds and values cause them to favor policies differing from those favored by people with different backgrounds and values.

Together, I think these two statements could be applied truthfully and insightfully to a wide range of current issues. The motives of other people, including and perhaps especially those with whom we disagree strongly, are seldom as simplistic as they appear. The motives of abstract, non-human actors like corporations, however, despite being made up of people, are often relatively simple.

It’s a mistake to reify corporations, and it’s equally a mistake to treat other real people like automatons. Both mistakes may produce what seem to be good predictions at first, but will fail in the long run: corporations don’t have a moral center, and will frequently do things that nobody in them would ever consider doing as an individual, while virtually no one gets up in the morning intent on doing what they perceive to be evil.

If we want to produce realistic, workable solutions to pressing problems, one of our first steps has to be eliminating fallacious assumptions, no matter how satisfying (for example, perceiving those we disagree with as evil morons) they may be.

0 Comments, 0 Trackbacks

[/politics] permalink

Sat, 07 Jun 2008

After doing my due diligence, combing NewEgg and the greater Internet for more than a week, reading every blog review I could find, and even making a little comparison chart, I decided to take the plunge and ordered myself a VoIP ATA.

At the last minute, I passed up the favorite for most of my comparison, the PAP2-NA, and ordered the slightly more full-featured SPA-2102 instead. Although it allegedly lists for $110, I picked it up from Telephony Depot for $58, which after shipping was the best deal I could find.

The 2102 arrived yesterday, and I got a chance to play around and set it up last night. Overall, the installation process went smoothly, although I did run into one significant hiccup. The 2102’s installation and setup documentation is sufficient if you’re planning on using it at the edge of your LAN, but if you want to have it inside the LAN, you’re mostly on your own. Furthermore, the paper documentation for the voice-prompt interface is flat-out wrong in several areas, giving incorrect values for options (a problem that I believe stems from a mismatch between the firmware revision on the box and the version the docs were written for).

After having to reset the box several times — switching it into bridge mode, combined with the poorly-documented voice prompt, can leave it in a basically un-configurable state — I began writing up notes on the ‘right’ order to change the 2102 from gateway mode to statically-addressed, internal mode.

To do it, you’ll need a laptop or other computer with an Ethernet port that you can disconnect from your home LAN. (You definitely don’t want to plug the 2102 into your LAN un-configured, since out of the box it acts as a DHCP server.)

SPA-2102 LAN Setup Notes - 2.27kB ASCII text

0 Comments, 0 Trackbacks

[/technology] permalink

Thu, 05 Jun 2008

For reasons not really germane here, I ended up typing up a very long email a few days ago, basically comprising a very rough introduction to VoIP. It’s less of a “guide” than it is just a braindump, but I thought I’d toss it up online, let Google do its magic, and perhaps it would be helpful to someone.

It can be found here:
VoIP Infodump - 16kB ASCII text

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 04 Jun 2008

While poking around on Wikipedia I came across this interesting graphic. It’s a map of the Regional Bell Operating Companies (RBOCs), the regional telecom monopolies — I’m sorry, I meant incumbent carriers — showing their coverage of the U.S. both today and back at the breakup in 1984. It’s worth taking a look at.

The color-coding represents their territory coverage today, while the shaded lines mark boundaries between RBOCs at deregulation.

Ironically, there are fewer of them today than there were in 1984. That’s right; for all the effort that went into breaking up Ma Bell, she’s putting herself back together again, Terminator-style.

Consider the southeast and midwest, which has been subject to the greatest amount of reconsolidation. Originally, there were three RBOCs: BellSouth, Southwestern Bell, and Ameritech. BellSouth had the southeast from Kentucky to Florida; Southwestern Bell had the southern part of the midwest from Missouri to Texas; and Ameritech had the Great Lakes region, from Wisconsin east to Ohio.

Today, you’ll find scant evidence of those companies — they’re all parts of the AT&T empire once more, along with the former California and Nevada RBOC, Pacific Telesis. The rest of the nation is basically split between Qwest in the West and Verizon in the East.

It’s looking more and more like 1984 will be remembered as the high-water mark for telco competition in the U.S., with a total of seven regional operating companies. Now, we’re down to three.

It’s as though the U.S., with a few years to dull the bad memories of high rates and rented phones, has forgotten what life under a monopoly carrier was like. If we’re not careful — especially with the evisceration of many pro-competition policies in the fallout from USTA v. FCC (2004) [1] — we’re going to end up back in some places we’d probably rather not return to.

Footnote 1: One of the best summaries of the issues at play in USTA v. FCC was written in early 2004, before the USSC declined to take up the case. It’s “USTA v. FCC: A Decision Ripe for the Supremes” by Fred R. Goldstein and Jonathan S. Marashlian. Here’s the money shot:

[T]he 62-page decision vacating the Federal Communications Commission’s (“FCC”) Triennial Review Order (“TRO”) can be best described as threatening to gut over 8 years of hard work, sacrifice and the billions of dollars that have been invested by entrepreneurial competitive local exchange carriers (“CLECs”) that are just beginning to create competition in the local telecom marketplace.

Why such a pessimistic analysis? Because unless the DC Circuit’s decision is stayed by the Supreme Court, many of the FCC rules that require incumbent local exchange carriers (“ILEC”) to share key elements of their networks with competitors, the rules which are the foundation of the still nascent competitive local market, will be vacated.

Of course, we know that’s exactly what the Supreme Court did, or rather declined to do; the decision wasn’t taken up for review, the DC Circuit’s pro-RBOC decision stood, and years of progress in bringing competition to telecommunications at the local level disappeared virtually overnight.

0 Comments, 0 Trackbacks

[/politics] permalink

Mon, 02 Jun 2008

One thing just leads to another around here. My search for a decent VoIP ATA (basically, an Ethernet-to-analog-telephone interface box) led me to discover that I’m all out of ports on the Ethernet switch that holds my home-office network together. Oops. Guess this VoIP project just got a little bigger.

It’s been a while since I’ve bought much home networking gear, and I was impressed when I fired up NewEgg to discover how far prices on Gigabit switches have fallen. But looking at the specs convinced me that not all of them are created equal — and some seem downright trashy. I’ve done battle with crummy, low-quality “consumer” networking gear in the past, and swore never again to buy hardware purely (or even mostly) based on price.

My absolute requirements are:

  • 8 ports
  • Gigabit Ethernet (802.3ab) on all ports and uplink
  • Jumbo frames (>9000B payload)

The major ‘nice to haves’ in a new switch are:

  • 12+ ports
  • Support for Spanning Tree Protocol
  • VLAN
  • Link aggregation
  • 802.1p ‘Priority Queuing’
  • Power Over Ethernet (PoE) injection

My requirements aren’t that stringent — pretty much any run-of-the-mill 8-port switch satisfies them — so really it’s an exercise in balancing cost against which of the ‘nice to haves’ I can get.

  • Rosewill RC-410

    • $50 from NewEgg
    • 8 ports
    • Jumbo frames
    • “802.1p flow control” (means priority tagging?)
    • 802.3ad - Link aggregation
    • Limited QoS (per-port QoS bit flagging?)
    • Rosewill seems to be NewEgg’s house brand. It got mostly positive reviews, with the main complaints being about the heat, and that there’s no 12 or 16-port version available.
  • Netgear GS108

    • $55 after rebate from NewEgg
    • 8 ports
    • Jumbo frames (9000B max.)
    • 802.3x - Flow control
    • 802.1p - Priority tags
    • Steel case
    • Looks decent, one of Netgear’s “ProSafe” series. Doesn’t do link aggregation, though, and the price before rebate is $70. However, the higher-end Netgear kit has performed well for me in the past, so that’s something it has going for it.
  • HP J9077A

    • $80 from NewEgg
    • 8 ports
    • Jumbo frames (9216B max.)
    • 802.3x - Flow control
    • 802.1p - Priority tags
    • Full specs on HP site
    • Starting to get into “real” networking gear, rather than the consumer/home-oriented stuff, here. Only downsides to this unit are the lack of VLAN and link aggregation. HP has a similar unit, the J9079A, which does both and a lot of other tricks besides, but only has 10/100 on the client ports and a GigE uplink.
  • Netgear GS108T

    • $105 from NewEgg
    • 8 ports
    • Jumbo frames
    • 802.3x - Flow control
    • 802.1p - “Class of Service” (aka ‘Priority tags’)
    • Port-based VLAN
    • Port and DSCP-based QoS
    • 802.3ad - Link aggregation
    • LACP - Automatic link aggregation
    • 802.1w - Rapid Spanning Tree protocol
    • Now we’ve moved from unmanaged switches into “smart” switches, and we bought ourselves VLANs, QoS, LACP, RSTP, Syslog/SNMP support, port mirroring, and tons of other fun stuff. For what you get, this seems like a good price — the question is just whether it’s necessary.
  • HP J9029A

    • $156
    • This one seems to take the J9077’s feature set and add to it many of the “smart switch” features in the Netgear above, including LACP aggregation, 802.1Q VLANs, and QoS. One major feature it doesn’t seem to support is RSTP/STP.

Decisions, decisions. The J9029A is pretty tempting, but it’s leaning distinctly towards overkill for a home LAN. However, I really like the idea of being able to set up VLANs at some point in the future; say, to take all the VoIP devices and put them on a separate VLAN and subnet, and then put that whole subnet behind a separate NAT router and give it a separate internet-facing IP address. (Obviously this would cost money and require purchasing a second public IP from Comcast.) I’m not sure if this will ever be necessary, but it seems like SIP+NAT is just a bad combination, and the glacial pace of IPv6 means it’s a problem that’s not going to go away any time soon. Being able to just segment off all the telephone stuff from data (and maybe making SAN stuff separate from that) seems like a nice feature.

0 Comments, 0 Trackbacks

[/technology] permalink

I’ve been working from home a lot lately, and that means spending a lot of time on the phone. Since I don’t have a POTS landline, and my cellphone is both expensive to use and tends to run out of batteries just when I need it most, I’ve been thinking that VoIP might make sense.

The main features I need are the capability to have two independent VoIP “lines” — one for me and one for the S.O. — and to integrate without too much fuss into my current LAN. I also don’t want to be tied to a single provider (e.g. Vonage, Comcast, Skype) or buy hardware that will become obsolete too quickly.

  • Linksys SPA-2100

    • Review at VoIPUser.org
    • Two ports for handsets/devices (FXS ports), no analog backup
    • Two 10Mb Ethernet ports (WAN and LAN) with optional NAT routing
    • Uses QoS bit
    • Discontinued; replaced by 2102, below
  • Linksys SPA-2102

    • Info page on VoIP-Info.org
    • 2 FXS ports
    • Two 100Mb Ethernet ports with optional NAT routing
    • QoS
    • T.38 fax transport (this is neat!)
    • Compatible with SPA-2000 dial plans (source)
    • Replaces the discontinued SPA-2100
    • Uses Voxilla configuration wizard
    • Allegedly supports two concurrent G.729 calls
  • Linksys SPA-3102

    • Review
    • One FXS, one FXO (one handset, one analog/POTS backup)
    • Can be configured using the Voxilla.com online tool
  • Linksys PAP2-NA

    • The “NA” version is unlocked, some other versions are locked
    • VoIPUser Review
    • TechZone Review
    • Wikipedia article
    • Official Linksys Page
    • Based on Sipura SPA-2000 (at least generally)
    • Basically a rebranded SPA-2002 (source)
    • Large user community
    • Two FXS ports (device), no FXO (analog backup)
    • Can only do one compressed call (G.726 or 729) at a time
    • Discontinued, replaced by PAP2T, below
  • Linksys PAP2T-NA

After doing a lot of comparison work myself, I found a nice page comparing all of Linksys’ VoIP ATAs. It claims, contrary to other sources, that the PAP2T can handle two simultaneous G.729 calls. I’m finding that claim more and more doubtful the more I read.

Right now I’m leaning towards the PAP2T, just because it seems likely to have the most people using it in the near future.

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 16 May 2008

A while back I wrote up a little ‘mini-HOWTO’ on connecting to the Internet via a T-Mobile cellphone from a Mac running OS 10.4. (It’s been a while since I’ve tried it, but I think all the information is still current.)

For a bunch of reasons that are well outside the scope of this blog, I recently had reason to try to do the same thing from a Windows PC. Although I’m sure the process makes sense to somebody, I didn’t find it particularly intuitive. Just in case there’s someone else out there trying to do the same thing and struggling, I thought I’d provide pointers to the online resources I found most helpful.

This page from the HowardForums Wiki was one of the most useful and concise. In fact, it seems to be by far the most referenced document on the topic.

Most of the problems I ran into were related to my Bluetooth adapter. Unlike OS X or Linux, where Bluetooth is handled by an OS component, Windows delegates it to a driver provided by the manufacturer. As with virtually all software produced by hardware manufacturers (scanner software, anyone?), I’ve yet to see a Bluetooth driver that wasn’t a flaky pile of crap. It’s what you get when you’re viewed as a ‘cost center’, I guess. Once you’ve gotten the phone and computer to pair, you’re about 50% done.

The HowardForums instructions tell you to configure the Bluetooth WAN connection by going into the ‘Network Settings’ control panel; on my system (Dell Inspiron 9400 with onboard Broadcom adapter) this was not correct. The network connection for the Bluetooth device connected using a ‘device’ called a “Bluetooth LAN Access Server Driver”. To configure it, I had to go through the My Bluetooth Places folder and edit the “BluetoothConnection” in the Bluetooth Properties window. It was in that window (“BluetoothConnection Properties”), rather than in the Network Connections panel, that I could enter the ‘phone number’ used for WAN access.

With that done, the next step is to add the correct initialization string for the APN you want to use. This is all pretty much as the HowardForums article directs. If you are on the low-cost “TZones” plan, you’ll need to use ‘wap.voicestream.com’ as the APN, making the init string at+cgdcont=1,"IP","wap.voicestream.com". You’ll only be able to connect via an HTTP proxy, but it’s six bucks a month (and probably a TOS violation) — what do you expect?

In theory, with the phone number and init string entered, the Network Connection created, and the phone successfully paired to the computer, you’d be good to go. However when I tried to connect, I just got a repeated “Error 692: There was a hardware failure in the modem” error. The ‘Error 692’ problem is apparently not uncommon, and has various solutions that seem to work for different people, with no discernible rhyme or reason. In my case, the problem was due to a leading space that had crept into the init string when I copied it from HowardForums. When that was corrected, I was able to bring the connection up.
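That failure mode is easy to demonstrate outside of Windows entirely. The strings below are just an illustration of the copy-and-paste hazard, not anything specific to the Bluetooth stack:

```python
# The init string as printed on the forum page vs. the string actually
# pasted into the dialog, with an invisible leading space tacked on.
intended = 'at+cgdcont=1,"IP","wap.voicestream.com"'
pasted = ' ' + intended

print(pasted == intended)          # False: the modem sees a different command
print(pasted.strip() == intended)  # True: stripping the whitespace recovers it
```

The two strings look identical in most dialog boxes, which is exactly why the bug took so long to spot.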

It does work, after a fashion, but it’s so slow that I’d really only consider using it in an emergency or in times of unbelievable boredom. However, the same procedure allegedly works for EDGE just as well as GPRS, so when I eventually get that EDGE-compatible phone (and the real data plan), I’ll hopefully be all set.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

Thu, 08 May 2008

As I mentioned a few days ago, Sudhir Venkatesh’s book “Gang Leader For A Day” is at the top of my reading list. Today I finally had some free time and dug in.

My initial reactions: it’s a fascinating book. Two chapters in, I’m definitely hooked. However, I’m not without some reservations. Venkatesh asks his readers to take a lot on faith; the nature of the book requires you to trust that the whole thing isn’t an elaborate fabrication, and that he’s honest both in his observations and recollections.

Although I’m certainly not one to cast aspersions — especially considering that I’m only a couple of chapters in — it would be difficult to fault readers who decided to take the whole thing with a grain of salt. It is, after all, a premise that borders on the unbelievable: a meek, bookish sociology grad student at the U of Chicago walks up to a housing project and immediately forms a deep and lasting bond — “a strange kind of intimacy … unlike the bond I’d felt even with good friends” (p.23) — with a gang leader? It’s a hell of a premise.

Also, in terms of research methodology, what Venkatesh is doing is almost quaint, practically to the point of being 19th-century. In some ways, the premise of the book is essentially a ‘white man goes into the bush’ narrative. I’m waiting with bated breath to see how the book deals with this obvious issue, since I’m sure accusations of depicting the ‘noble savage’ in a tracksuit are something Venkatesh must have anticipated. At least, I hope so.

At any rate, the opening chapters suitably grabbed my attention. I have a few lingering reservations and doubts, but I’m certainly sold on reading it through.

0 Comments, 0 Trackbacks

[/other/books] permalink

Wed, 07 May 2008

Alex Steffen has a nice essay on the WorldChanging site where he sums up the problem I’ve always had with some self-described ‘survivalists’ and many ‘apocalyptic environmentalists’:

But real apocalypses are sordid, banal, insane. If things do come unraveled, they present not a golden opportunity for lone wolves and well-armed geeks, but a reality of babies with diarrhea, of bugs and weird weather and dust everywhere, of never enough to eat, of famine and starving, hollow-eyed people, of drunken soldiers full of boredom and self-hate, of random murder and rape and wars which accomplish nothing, of many fine things lost for no reason and nothing of any value gained. And survivalists, if they actually manage to avoid becoming the prey of larger groups, sitting bitter and cold and hungry and paranoid, watching their supplies run low and wishing they had a clean bed and some friends. Of all the lies we tell ourselves, this is the biggest: that there is any world worth living in that involves the breakdown of society.

It’s not the main thrust of the essay (although it’s worth reading anyway), but when I read it, I felt like he’d been reading my mind. It’s easy to look at the range of problems facing the world and fall into despair, or worse, self-hate. And it’s a short step from worrying about catastrophe to actively wishing for it.

Which is not to say that we shouldn’t consider or plan for terrible scenarios; we just need to evaluate them rationally, and not fall into the trap of being seduced by ‘doomer porn’ into believing that such catastrophes won’t affect us negatively.

We have some major challenges facing us as a civilization in the next generation or two; Sir David Omand, former head of Britain’s GCHQ, put them into three major groups. There are political threats, including wars, terrorism, and governmental destabilization by other groups; there are environmental threats, including the end of petroleum fuels, global warming, and pollution; and finally there are economic threats, including a “meltdown” of the global economy.

Unfortunately it’s rare for more than one of these problems to capture the public’s attention at once. We tend to fixate on one issue — sometimes to the point of obsession, as in the case of the ultra-survivalists and ‘doomers’ — while letting the other ones slide, then get bitten in the proverbial ass and fix our attention somewhere else. It’s important that we keep a steady eye on all the issues, but not get so caught up in any of them that we despair completely.

0 Comments, 0 Trackbacks

[/politics] permalink

One of my favorite Google products is Google Notebook, and one of my more frequent uses of it is to keep track of particularly insightful or pithy posts that I read online. Sure, most sites have their own methods for doing this, but Notebook keeps them all in one place. Unfortunately, I never really end up doing much with all the stuff I save.

Earlier today I found myself reading through some of my notes, and thought I’d share a few. Any one of them could be an entry in itself, but honestly I think there’s little I can add to most of them, so I’ll just point you back to the originals and leave it at that.

On Hillary Clinton’s ‘Prayer Breakfasts’, by MetaFilter’s dw:

[…] Hillary attending the prayer meetings is all about triangulation for her. She knows where the business of the GOP elite gets done, so she’s just going to walk right in there. If they were into watching pre-op trans burlesque while drinking paint thinner, Hillary would show up at the door with a copy of The Crying Game and a gallon of turpentine. […]

boubelium had an insightful quip about the difference between politicians and economists:

[…] if a charismatic politician tells you that he has seen the economic future, he hasn’t. He isn’t smart enough or boring enough to undertake the effort.

“Tom Collins” of Tom Collins’ World Wide Web Log — sort of a ‘Fake Steve Jobs’ of the Beltway, with the best understanding of that milieu on the Internet — sums up everything you need to know:

“Veronica, this is the United States of America. With the exception of short period of reform that lasted about forty years during the last century, the entire history of this country has been nothing more or less than the work of lying, thieving, cheating, amoral, greedy, inhuman scum bags.”
“Which means?”
“That, given the chance, you should always go with the lying, thieving, cheating, amoral, greedy, inhuman scum bags. Do that, and you can’t lose - it’s the American Way.”

On a slightly less cynical note, Vorfeed has one of the better comments I’ve read about the gun control ‘debate’ in a while:

[…] A little less than half of US households (and about 25% of all US adults) own at least one gun, and yet only about 30,000 people are killed by them per year, and more than half of those are suicides. … Criminalizing 25% of the country in order to save 30,000 lives is a terrible trade-off — if saving lives is really the issue, we’d do much better if we built a huge public transportation network and then banned cars. … As far as I can tell, the “gun control debate” in this country serves merely to distract from the actual issue — that is to say, the problem is violence, not guns! Rather than myopically concentrating on the instrument used, both sides of the gun debate could probably benefit from some realistic, holistic thinking about ways to mitigate the root causes of violence.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Thu, 01 May 2008

Just a few quick thoughts on some books I’ve read recently:

  • The Omnivore’s Dilemma by Michael Pollan

I realize I’m about six months or so late to the party with this book; now you can barely mention it in public without a dozen people rolling their eyes at you in boredom. But the number of people you’ll run into who’ve read it, are reading it, or have been berated by their spouses that they ought to read it, is a testament to how important this book is. If you’re one of the 15 people remaining in the country who haven’t heard of it yet, it’s worth your time. (If you’re in a desperate hurry, only read the first half, since it’s the most important.)

At some point I’ll probably write in greater depth about it, but suffice it to say for now that it completely changed my (admittedly ignorant) views on a number of food-related topics. NY Times Book Review

  • The Ghost Brigades by John Scalzi

I’m a big fan of Scalzi after reading his debut novel Old Man’s War. This is a sequel, or at least a follow-up, set in the same universe. I thought it was solid, and would heartily recommend it to anyone who liked OMW.

  • The Android’s Dream by John Scalzi

I picked this up from Amazon at the same time I was ordering The Ghost Brigades, simply because it’s Scalzi’s newest book. It’s a stand-alone, and it has a significantly different pace and tone than OMW/TGB. I think my feelings for it might have been colored somewhat by reading it back-to-back with Ghost Brigades; although The Android’s Dream is good, it’s a bit of a jarring transition. (My biggest thought throughout reading it was that this is what L. Ron Hubbard’s Mission Earth books could have been, if LRH hadn’t been a crazy, racist, homophobic misogynist with a dearth of talent and a team of religious sycophants instead of an editor. Okay, so on reconsideration it has nothing at all in common.) Overall I don’t think it’s Scalzi’s best work — that’s Old Man’s War, by a mile — but it was certainly better than par for the course.

  • His Dark Materials Trilogy by Philip Pullman

I don’t normally read books out of spite, but this was an exception. I decided to read Pullman’s trilogy only after hearing about the “controversy” it had generated within the Christian Right, on the assumption that anything that pisses off a bunch of thin-skinned religious nutbags must have at least some redeeming value.

As it turns out, I was about half right. I thought the books were pretty decent overall, and significantly milder in terms of content than I’d been expecting based on the boycott threats the movie received. As a treatise on or introduction to humanism it’s not much, but I suppose that’s better than becoming a Randian discourse on the subject.

The other accusation I’ve heard leveled at the series — namely that it promotes or condones age-inappropriate sexual behavior — doesn’t seem to stand up, either. Without going into great detail on the plot, I’ll just say that the author certainly doesn’t venture (at least in literal descriptions) anywhere that wouldn’t be rated “PG-13”. If readers look between the lines and see more than that going on, that’s really their own business — and really says more about them than it does about Pullman.

Overall I don’t think there’s much reason for adults to pick up the series, unless you interact with younger readers or just want to keep tabs (as I do) on whatever has the far-Right’s panties in a bunch this week. I’d recommend the books with limited reservations to most open-minded junior-high/high-school-age readers or their parents; the only real exception would be students who’ve already moved on to more complex speculative fiction. (Personally, I can only imagine my younger self being impressed by His Dark Materials if I’d come across it before I’d discovered Heinlein; after that I think it would have seemed a bit tame.)

And in no particular order, my current reading list for the next few months:

  • Gang Leader for a Day by Sudhir Venkatesh
  • In Defense of Food by Michael Pollan
  • The Botany of Desire by Michael Pollan (if it’s not already evident, I have a Michael Pollan fan in the house)
  • The Last Colony by John Scalzi
  • Dreaming in Code by Scott Rosenberg (hat tip to MetaFilter’s Drezdn)

I’m especially looking forward to reading Venkatesh’s book, since I found the chapters discussing his work to be the most interesting parts of Freakonomics; I’m curious to see if his conclusions are the same as Dubner and Levitt’s.

0 Comments, 0 Trackbacks

[/other] permalink

I’ve been using Subversion a lot lately, and for the most part I’m pretty much floored by it. It’s a huge step up from CVS, and it offers a lot of flexibility, beyond anything I’ve seen in commercial version-control products. Plus, you can’t beat the price.

However, there are a few things that have irked me. One of the biggest is that SVN doesn’t preserve filesystem metadata, particularly document modification times. Apparently this is by design. (‘Why’ isn’t exactly clear, but supposedly has to do with automated build tools.) But to me, filesystem metadata — modification stamps in particular — is fairly important, and I’m not really happy with any tool just blithely throwing it away, as SVN does when you import a folder into version control and then check out a working copy.

As a sort of half-assed solution, I wrote a couple of little scripts to pull the file access and modification times from the filesystem, and store them in SVN as “properties” associated with that particular document. (Since Subversion lets you store as many key:value pairs for each document as you’d like, in many ways it’s superior to most commonly-used disk filesystems … it just doesn’t bother putting much stuff in there by default. Bit of a wasted opportunity.) Although this isn’t as useful as having it actually in the filesystem, it at least ensures that no metadata is destroyed when you load files into version control. To me, the idea of not ever destroying data or context information is important. I like knowing that if I ever need to know the last modification time of a document prior to loading it into version control, it’s all there.

Due to the mechanics of Subversion, the use of these scripts is a little roundabout. It’s a multistep process:

  1. Import the directory you want to version-control into the Subversion repository. Don’t delete it!

  2. Check out the directory, giving it a name different from the ‘original’ copy. (I like to name it something like “directory-svn”.)

  3. Copy — using your preferred CLI or GUI method — all the files from the old, non-version-controlled directory to the working directory. Clobber all the files in the working directory.

    [Why? This overwrites all the files in the working directory — which have their atime, ctime, and mtime set to whenever you checked the directory out (not really that useful) — with the original files, which have useful timestamps on them that actually correspond to the data in the logical files.]

    N.B.: You need to copy the files from one directory to another; don’t overwrite one directory with the other. If you do the latter, you’ll wipe out the “.svn” directory in the working directory, and it’ll no longer be a functioning SVN checkout.

  4. Now that you have a version-controlled working directory full of files with useful timestamps (run ‘ls -al’ if you want to check; that’ll show you the mtime), you can run the script below. This will take the ctime, mtime, and atime and copy them into SVN properties (named “ctime”, “mtime”, and “atime” respectively). Run ‘svn commit’ to write these changes to the repository.

  5. When you check out the working directory onto a new computer, you still won’t have the right metadata actually written into the filesystem, but you will have it in the properties. To view the properties associated with a file, run ‘svn proplist --verbose filename’.
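One detail step 3 glosses over: the copy has to be done with a method that preserves timestamps, or the whole exercise is pointless. On the command line, a plain cp stamps the copies with the current time; cp -p carries the original mtime over. A quick sanity check (GNU coreutils assumed):

```shell
# Step 3 only does its job if the copy preserves timestamps.
# Plain 'cp' stamps the copy with the current time; 'cp -p' carries
# the original mtime over. (GNU stat/touch assumed.)
src_dir=$(mktemp -d); dst_dir=$(mktemp -d)
echo "hello" > "$src_dir/doc.txt"
touch -d "2008-01-15 12:00:00" "$src_dir/doc.txt"   # simulate an old original
cp -p "$src_dir/doc.txt" "$dst_dir/doc.txt"         # -p preserves mode and timestamps
orig_mtime=$(stat --format %Y "$src_dir/doc.txt")
copy_mtime=$(stat --format %Y "$dst_dir/doc.txt")
```

Most GUI copy methods (Finder, for instance) preserve modification times by default, so this mostly matters for CLI users.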

Not as good as if SVN just respected and didn’t destroy filesystem metadata by default, but it’s better than nothing. On the system that originally housed the data, your files still have all the correct values stored in the filesystem (since we copied them from the old, non-version-controlled directory), and on other systems, you’ll be able to retrieve the file’s original timestamps using ‘proplist’.

Here’s the script for Mac OS X (and probably BSD?):

#!/bin/bash
# A little script to take modification date/time and stick it
# into a Subversion property

for file in *
   do
   mtime=`stat -f %Sm "$file"`
   svn propset mtime "$mtime" "$file"
   ctime=`stat -f %Sc "$file"`
   svn propset ctime "$ctime" "$file"
   atime=`stat -f %Sa "$file"`
   svn propset atime "$atime" "$file"
done
exit 0

And on Linux it’s the same, except the syntax differs slightly:

for file in *
do
   mtime=`stat --format %y "$file"`
   svn propset mtime "$mtime" "$file"
   ctime=`stat --format %z "$file"`
   svn propset ctime "$ctime" "$file"
   atime=`stat --format %x "$file"`
   svn propset atime "$atime" "$file"
done

At the moment I’m just concentrating on archiving some of my documents and shoving them into SVN — this has the advantage both of getting them in version control, and also putting them on a central server where I can easily back them up — so I’m satisfied with just sticking the original file’s timestamps into SVN properties for archival purposes. Obviously, the stamps don’t get updated as you modify the file, so they’re really just for historical purposes.

What would be nice would be to fix Subversion so that, on import, it collected as much metadata as it could about a file and stored it in the properties, and then used that information to recreate the files on checkout (only if you wanted it to, of course, or perhaps only if the file had a single revision in the repo, meaning it hadn’t been modified since being added). That’s a bit beyond both my abilities and my level of interest at the moment, but it seems like a useful feature, particularly as more and more non-programmers start to discover Subversion and how useful it can be for managing home directories and other lightweight content-management tasks.
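The restore-on-checkout half of that idea is actually the easy part, at least for mtimes. If the script stored epoch seconds (stat --format %Y) rather than the human-readable %y format, the value that ‘svn propget mtime’ would return could be written straight back into the filesystem with touch. A sketch of just the touch mechanics, with the propget call stubbed out (GNU stat/touch assumed):

```shell
# Hypothetical restore step: with mtimes stored as epoch seconds,
# the saved property value can be written back with 'touch -d @<epoch>'.
f=$(mktemp)
touch -d "@1199145600" "$f"        # pretend this is the file's original mtime
saved=$(stat --format %Y "$f")     # stand-in for: saved=$(svn propget mtime "$f")
touch "$f"                         # a fresh checkout would reset the mtime to 'now'
touch -d "@$saved" "$f"            # restore the recorded timestamp
restored=$(stat --format %Y "$f")
```

The hard part is making this automatic; Subversion’s client-side hooks don’t really cover checkout, which is presumably why nobody has bolted this on already.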

3 Comments, 0 Trackbacks

[/technology] permalink

Fri, 18 Apr 2008

After reading about game designer Steve Gaynor’s bet about the art of interactive games in half a century, I wrote my thoughts in a MetaFilter comment. In it, I made my own prediction:

In 50 years, I fully expect interactives to be defying comparison to any other art or entertainment form, except maybe hallucinogenic drugs. Of course, there will still be “games” in the way we think of them today, because people like light entertainment and they’re fun. But I also think there will be computer-mediated ‘experientials’ that involve you going into a room and sitting down, and coming out three or four days later wondering how bad the flashbacks are going to be.

I really don’t think it’s that much of a stretch.

Nobody would question today that a filmmaker at the height of his craft can provoke an intense emotional response from his audience; in fact, the ability to do so might be the greatest indication that a filmmaker is worth his salt. But really, a film is just a series of rapidly flashing still pictures accompanied by a pre-recorded soundtrack. It’s not interactive; if anything it’s impersonal: close your eyes or walk out of the room and come back, and it will have moved on without you. It’s just a recording. That we can be so emotionally affected by movies — brought to heart-pounding excitement or to tears — is a testament to our ability to concentrate on and immerse ourselves in artificial worlds through limited sensory input.

Interactive media have the possibility of being so much more than they currently are, which is just barely approaching the narrative depth of film. Most modern games — and yes, I know, there are exceptions; but most major-market ones in the US — trade on only a few emotions: largely fear, excitement, surprise, and anger. It’s an understatement to say that there’s a huge amount of headroom for improvement.

Future games could combine this creative, narrative latitude — the stuff of the very best cinema — with the immersiveness and interactivity of games, and I think the results could be really astounding.

What’s missing, today, is the audience. It’s hard to make a big-budget game that doesn’t fall into a couple of well-defined categories (first-person shooter, maybe god-mode RTS, or open-world third-person explorer) and cater to a market dominated by a young, male demographic. This is largely because older consumers didn’t grow up with video games, or grew up with games that were so primitive — arcade-style “twitch” games — that they don’t take them seriously as anything but momentary entertainment, and thus aren’t willing to spend the money to purchase a platform capable of delivering high-quality interactive entertainment. That’s something that will almost inevitably change in the coming decades, as people who have grown up with narrative games get older and push the boundaries. As long as Moore’s Law holds, the capabilities that a designer can bring to bear to tell a particular story for a certain amount of budget will probably only improve over time, as well.

Time to set a reminder to come back in 50 years and see how we all did.

0 Comments, 0 Trackbacks

[/technology] permalink

Just in case anyone thought that mind-boggling ignorance and gross stupidity were restricted to members of the U.S. government and civil service, this story out of Russia, reported by Ars Technica, will disabuse you of the notion. Apparently they want to impose a mandatory registration and licensing regime on all consumer WiFi gear, under penalty of confiscation:

[T]he government agency responsible for regulating mass media, communications, and cultural protection has stated that users will have to register every WiFi-enabled device with the government […] registration could take as long as ten days for standard devices like PDAs and laptops and […] it intends to confiscate devices that are used without registration.

The Ars story references a Russian source, Fontanka, but it’s (unsurprisingly) in Russian.

Although it’s easy to go for the censorship-conspiracy angle, I’m not sure that there’s as much evidence for that interpretation as there is for plain old public-sector incompetence:

The Fontanka.ru article quotes an industry specialist who points out that the government agency behind the policy is run by a former metallurgic engineer who likely has no clue about many of the technical issues overseen by his organization.

It’s almost heartwarming, how much we have in common.

0 Comments, 0 Trackbacks

[/politics] permalink

The Financial Times has a very interesting article on the relationship — or in this case, lack thereof — between population growth and prosperity. It astounds me a little that any of their findings would be surprising to a first-worlder in 2008, but I’ve heard enough people lament the population decline in Japan and Western Europe that this obviously isn’t the case.

There are two important lessons here. One is that we should always look at per capita, rather than overall, production when measuring the success or failure of various economic policies. Any policy that produces a higher GDP at the expense of a lower per-capita figure is stupid, since it’s the per-capita figure that’s linked most intimately with standards of living. Lesson two is that policies that are based on continuous population growth just aren’t sustainable, and we need to get rid of them (or at least rethink them) before we hit the inflection point and they become untenable. What we must not do is view the population decline itself as a problem, because it isn’t. It’s taking population growth as a given that’s the mistake.

Countries with declining populations, or with populations that may begin to decline soon, have a unique opportunity to consolidate standards-of-living gains and create new social structures that aren’t predicated on pumping out offspring (and consuming non-renewable resources) by the bushel-basket. This is nothing but good for people living in those areas, provided the transition is managed thoughtfully.

0 Comments, 0 Trackbacks

[/politics] permalink

Wed, 02 Apr 2008

COBOL: (Synonymous with ‘evil’.) A weak, verbose, and flabby language used by card wallopers to do boring mindless things on dinosaur mainframes.

[from the Jargon File]

Given many C and LISP hackers’ opinions of COBOL, it’s perhaps unsurprising that it’s one of the least-mature languages on Linux. While C has a compiler (gcc) that rivals some of the best commercial implementations, I’ve had nothing but frustration so far as I’ve tried to get a working COBOL compiler running.

There are, as far as I can tell, two COBOL compilers that seem like they might be useful for basic testing and development work: TinyCobol and OpenCobol. TinyCobol compiles COBOL into GNU assembly, which is then translated into machine code by the GNU Assembler; OpenCobol translates COBOL into C, which is then compiled by gcc.

Not being a C programmer — meaning that one of the benefits of OpenCobol, the ability to debug your COBOL program in C, wasn’t particularly useful to me — I decided I’d give TinyCobol a shot first.

Although there are references around the ‘net to binary packages of TinyCobol, there wasn’t any evidence of one for Debian on TC’s website. Hoping — perhaps naively — that something called ‘Tiny’ wouldn’t be too much of a bear to compile myself, I grabbed the sources and dove in.

Although I didn’t have any problems in configuration, as soon as I went to run ‘make’, the errors began. The crucial one seemed to be:

 gcc -I/usr/include -I/usr/local/include -I../lib -I../ -c scan.c
 scan.c:1062: error: syntax error before 'YY_PROTO' 
 scan.l:122: error: syntax error before 'switch'
 ...

After that, things just fell apart. I played around with it for a few hours, trying and retrying, checking all the dependencies, but to no avail. I even went so far as to try it on a brand-new Dapper installation running in a VM, just to make sure something about my system wasn’t poisoning it. Nope.

So after giving up — at least for the moment — on TinyCobol, I decided to give OpenCobol a try instead. Although OpenCobol apparently has a package for Ubuntu Edgy, there’s currently no backport to Dapper, so I was left again with the unappealing alternative of building it myself.

I got the OpenCobol sources and its dependencies installed easily enough, and ran the ./configure script without problems. Looking good so far. But as soon as I typed ‘make’ and started to actually build it, I felt a little déjà vu:

fileio.c:308: error: syntax error before 'DB'

Followed by several pages of ‘incomplete type’ errors. So much for that. A quick Google for the error didn’t reveal anything, and since I’m not a C programmer, that’s pretty much the end of the line. (There’s a reason why I normally have a blanket rule against any non-trivial software that requires compilation. The number of times I’ve tried compiling some large software package and actually had it work without deal-breaking problems is very, very small.)

I’m tempted to take this as some sort of cosmic sign; the revenge of all those scoffing C and LISP greybeards on their COBOL cousins. Linux — at least my Linux machine — just doesn’t seem to want anything to do with it.

Anyway, should anyone else out there find a way of running TinyCobol, OpenCobol, or some other COBOL compiler on Ubuntu Dapper (before it goes out of support and I’m forced to upgrade anyway), I’m all ears.

At the moment I’m torn between just giving up completely on Linux for this purpose and looking for a working COBOL implementation for Win32, and feeling like, since I’ve already put a day’s worth of work into this, I ought to keep banging on it and see if I can get either TC or OpenCobol working on Ubuntu Edgy or one of the other newer versions. I think I’ll probably start downloading a new Ubuntu LiveCD while I look for Windows tools, and see which one I get working first.

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 05 Mar 2008

This is just a quick entry to point out a very nice, helpful, HOWTO-style guide on QuietEarth.us that goes through the process of setting up syslog-ng to receive remote log entries from another device on the local network.

In my case, as in the author’s, I wanted to send the logs produced by my gateway/router running OpenWRT to a Linux box with plenty of storage for later analysis. Although this can be done with the stock — and ancient — sysklogd, it’s as good an excuse as any to install syslog-ng, which is much more flexible. Installation on Ubuntu Dapper is painless, and with a few lines of configuration you can have your router’s (or other device’s) logs sent to a central machine, filtered, and logged into its own file.
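For reference, the receiving side boils down to just a few lines of syslog-ng.conf. This is a sketch rather than the linked article’s exact config; the listen address, port, router IP, and log path are placeholders for whatever your network uses:

```
# Receive UDP syslog from the router and write it to its own file.
source s_net { udp(ip(0.0.0.0) port(514)); };
filter f_router { host("192.168.1.1"); };    # the router's address
destination d_router { file("/var/log/router.log"); };
log { source(s_net); filter(f_router); destination(d_router); };
```

The filter is what keeps the router’s chatter out of the box’s own logs; without it, everything arriving on the socket would land in the same file.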

I can vouch for the instructions in the article as working perfectly on Ubuntu 6.06.2 LTS and an OpenWRT router. (Enabling log transmission on the router requires enabling the syslogd service under the ‘Administration’ tab, ‘Services’ subtab.)

The logical continuation of this is to transmit logs not from two computers on a LAN using UDP, which is the standard method, but over the Internet using TCP — encrypted, of course. This article seems like just the thing, and I’ll probably be playing around with it more in the future.

0 Comments, 0 Trackbacks

[/technology/poweredge] permalink

Tue, 04 Mar 2008

I got the PowerEdge booted up and working yesterday, with only a few hiccups here and there. The biggest problem I had was getting into the PERC2/SC’s configuration menu from the BIOS; you have to press the right key at exactly the right time, or it won’t work. (Also, it turns out the ‘2300 has three SCSI controllers in it; two on the motherboard, and then the PERC2/SC on PCI. The internals are both Adaptec non-RAID.)

Once into the PERC’s configuration, setting up the four drives I had installed as a RAID-5 array was trivial, and the format took only a few moments. The software also makes it look like it’ll be easy to add more drives and expand the size of the RAID volume later, or even add a separate striped set using the remaining two slots in the backplane. (I doubt I’ll have much use for the latter, but it’s good to have the option.)

Ubuntu 6.06.2 LTS installed without any significant trouble onto the RAID volume; I chose an LVM install in order to give me some more flexibility later when I expand the array. The ~220GB RAID volume, which Linux sees as /dev/sda, is partitioned into a small /boot (250MB) with the remainder given over to LVM as a ‘physical volume.’
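The ~220GB figure is what you’d expect: a RAID-5 array gives you (n − 1) disks’ worth of usable space, with one disk’s worth consumed by distributed parity. The arithmetic for this setup:

```shell
# RAID-5 usable capacity: (number_of_disks - 1) * disk_size,
# since one disk's worth of space goes to distributed parity.
disks=4
disk_gb=73
usable_gb=$(( (disks - 1) * disk_gb ))
echo "$usable_gb"    # 219, i.e. the ~220GB volume Linux sees
```

Adding a fifth 73GB drive to the array later would bring the usable figure to 292GB by the same formula.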

LVM is a pretty slick system all by itself and deserving of a separate article just for the basics, but I’ll hold myself to saying that it gives you a ton of options. Basically, LVM introduces an additional layer of abstraction between filesystem devices as they’re seen by the OS (/dev/sda1, sda2, etc.) and the actual disks or on-disk partitions. When you use LVM, the actual disks or partitions become “physical volumes” (PVs), which you pool into “volume groups,” and then assemble together in various ways to create “logical volumes” (LVs). In my very simple setup, I just let the Ubuntu installer create one 200GB PV, put it into one volume group, and make one LV, the root partition, out of it.

In retrospect, I should have spent some more time in the installer and made some more LVM LVs; separate ones for the traditional Linux partition scheme. This is because while LVM makes it easy to resize LVs after the fact, most filesystems don’t support shrinking, only growing. It’s easy to make a 5GB partition bigger if you run out of room, but it’s much harder to take a 200GB one down to 5GB. So I’m basically stuck with everything in the big / partition, at least until I add more disks and have some more space to work with.
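When the time comes to grow the array, the LVM side is only a few commands. This is a sketch with hypothetical device and volume-group names (run as root), not something I’ve done on this box yet:

```shell
# Sketch: growing the root LV after adding a disk (names hypothetical).
pvcreate /dev/sdb1                 # 1. turn the new partition into a PV
vgextend myvg /dev/sdb1            # 2. add it to the existing volume group
lvextend -L +100G /dev/myvg/root   # 3. grow the logical volume by 100GB
resize2fs /dev/myvg/root           # 4. grow the ext3 filesystem to match
                                   #    (unmount first, or grow online if
                                   #     your kernel/e2fsprogs support it)
```

Shrinking, by contrast, means unmounting, shrinking the filesystem first, and only then shrinking the LV, with data loss if you get the order or the sizes wrong, which is exactly why a too-big / partition is such a pain to undo.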

With the system now running and a minimalist ‘server’ installation of Ubuntu installed, the next step was to install software. The only hitch here was noticing that, for some reason, the SMP kernel hadn’t been installed. I know this was originally by design, but I thought it had been fixed in 6.10. No matter: a quick sudo apt-get install linux-686-smp followed by a reboot, and everything was good.

All in all, not bad for a (nearly) free box. It’s not the fastest thing in the world, but it has the right features, and I think it’s solid enough to serve me for a good long time.

0 Comments, 0 Trackbacks

[/technology/poweredge] permalink

Fri, 29 Feb 2008

The PowerEdge project is currently held up for want of screws. Specifically, twenty-four #6-32 x 1/4” flat-head machine screws.

They’re needed to mount the SCSI drives into the hotswap trays (procured on eBay for a few dollars each); standard hard-drive mounting screws — which are almost always #6-32 pan-head — won’t work. The drive trays have the holes for the drive-mounting screws countersunk into the plastic sides of the trays, since they have to fit absolutely flush. (Pan heads will hold the drives into the trays, but the protruding heads prevent the tray from sliding into the hotswap bay, as I found out to my chagrin.)

A handful of machine screws ought to be an easy hardware-store purchase, but unfortunately, finding a really good hardware store — the kind of place with drawers upon drawers of nuts, bolts, and other small parts, as opposed to the more common “home improvement” store — is right up there with finding a good typewriter repairman. They exist, but they’re few and far between.

After making some phone calls, I found a winner in Fischer Hardware of Springfield, VA. When I called to ask about screws, they cheerfully informed me that not only did they have 6-32 x 1/4” screws (spoken with a tone that seemed to imply “of course, dummy, we have #6-32 machine screws…”, truly music to my ears), they had them in my choice of stainless steel, brass, or zinc, in both Phillips or flat drive, how many did I want of each? Now that is the sign of a decent hardware store.

So tomorrow I’ll drive over there and see about picking up a couple dozen, and then I think I’ll finally be ready to boot the beast up.

0 Comments, 0 Trackbacks

[/technology/poweredge] permalink

Sat, 16 Feb 2008

The 2300 is an interesting (and large, and heavy) beast. It’s all SCSI — no IDE here — and has both an onboard U160 channel and the option to add a hardware PCI RAID controller. Mine has that option (called the “PERC 2/SC”) installed, and connected to a 6-bay front-loading hotswap backplane for SCA2 drives. Unfortunately, all the drives had been pulled, along with their sleds, when I bought it. Bummer. (I understand not leaving the drives in a surplused machine, but really, taking the sleds? That’s a bit low.)

A quick peek inside showed that it was full of RAM — exactly how much I couldn’t determine, since the chips didn’t specify and the part numbers didn’t bring up any useful information when Googled — and had a single 550MHz PIII processor installed.

Since the machine has two processor slots, my first search was for an extra PIII to fill the empty one. eBay quickly came to the rescue; for less than a measly $5 (and that’s with shipping), I had a second processor.

A little more Googling turned up some good deals on SCA2 U160 hard drives; unfortunately not as inexpensive on a per-MB basis as modern ATA disks, but dirt cheap compared to what they went for only a few years ago. I opted for four 73GB 10k RPM Seagates to start with, enough to set up a decent RAID-5 array while still leaving some room for additional expansion later.

On the OS front, I’m still not sure whether I want to go with BSD — probably OpenBSD, since I have an official CD set, bought mostly on impulse a while back — or Linux. I’m more comfortable in general with Linux, and I feel like I’ll be able to do more with the server if it’s running Linux, but I’ve been looking for an excuse to delve more into BSD and can’t decide if this is when I should take the plunge or not.

0 Comments, 0 Trackbacks

[/technology/poweredge] permalink

Thu, 14 Feb 2008

I was playing around earlier today, trying to find the slickest one-line command that would back up my home directory on one server to a tarball on another. (If I didn’t care about making the result a tarball, rsync would be the obvious choice.) I started to wonder whether it was possible to run tar on the local machine, but pipe its output via SSH to a remote machine, so the output file would be written there.

As is so often the case with anything Unix-related, yes, it can be done, and yes, somebody’s already figured out how to do it. The command given there is designed to copy a whole directory from one place to another, decompressing it on the receiving end (not a bad way to copy a directory if you don’t have access to rsync):

tar -zcf - . | ssh name@host "tar -zvxf - -C <destination directory>"

Alternatively, if you want to do the compression with SSH instead of tar, or if you have ‘ssh’ aliased to ‘ssh -C’ to enable compression by default:

tar -cf - . | ssh -C name@host "tar -vxf - -C <dest dir>"

But in my case I didn’t want the directory to be re-inflated at the remote end. I just wanted the tarball to be written to disk. So instead, I just used:

tar -zcf - . | ssh name@host "cat > outfile.tgz"

There are probably a hundred other ways to do this (e.g. various netcat hacks), but this way seemed simple, secure, and effective. Moreover, it’s a good example of SSH’s usefulness beyond simply being a glorified Telnet replacement for secure remote interactive sessions.
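If you want to convince yourself the pipe behaves as expected before pointing it at a real host, the same pattern can be dry-run locally, with a plain `cat` standing in for the `ssh` hop (the paths below are throwaway examples):

```shell
# Local dry run of the tar-over-a-pipe pattern; `cat` stands in for
# `ssh name@host "cat > outfile.tgz"`, so no remote host is needed.
mkdir -p /tmp/tarpipe/src
echo "hello" > /tmp/tarpipe/src/file.txt
cd /tmp/tarpipe/src
tar -zcf - . | cat > /tmp/tarpipe/outfile.tgz
tar -ztf /tmp/tarpipe/outfile.tgz   # lists ./file.txt among the contents
```

Once that looks right, swapping the `cat` for the real `ssh name@host "cat > outfile.tgz"` is a one-word change.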

0 Comments, 0 Trackbacks

[/technology] permalink

When it comes to geeky stuff, at heart I’m a hardware guy. I’m reasonably proficient at software configuration, and I can bang out a shell script or a little Python if there’s a need, but hardware has always struck me as more intuitive. Had I been born a bit earlier, I’d probably have become more interested in cars than in computers, but sadly modern cars are fairly difficult to work on. Plus, mass production and Moore’s law, together with the ‘upgrade treadmill’ perpetuated by hardware and software vendors, have conspired to create an enormous, basically everlasting supply of IT junk, just waiting to be messed with and put to good use. As cheap hobbies go, as long as you stick to gear that’s at least 4 or 5 years old, it’s about one step up from ‘trash art.’

So it was with that in mind that I found myself at a seedy self-storage facility last week, loading my latest acquisition into the back of my car. Via a corporate-surplus website, I’d picked up an old Dell Poweredge 2300 server for next to nothing. (Arguably, anything more than free is too much, but I was willing to pay a little to get one that was known to work.)

Over the next few weeks I’ll be playing around with it, with the eventual goal of setting up either BSD or Linux on it, and putting it to some sort of productive use (probably a backup server, if I can get the RAID system working) in my home LAN. Since information on the 2300 seems to be fairly limited, and there also seem to be a lot of them turning up on the used/surplus/come-get-it-on-the-curb market, I’ll periodically make updates with anything interesting I’ve found, and general progress.

0 Comments, 0 Trackbacks

[/technology/poweredge] permalink

Fri, 08 Feb 2008

[Prompted by this MetaFilter discussion.]

New technologies create new ways of communicating, thinking, and producing, but they also inevitably create new ways for con-men and hucksters to make an easy buck. Email brought us instantaneous, nearly zero-cost global communication; it also brought us spam. Webpages and search engines brought us more information at our fingertips than ever before in history; they also brought us domain squatting, domain tasting, typosquatting, and blog and link spam. It’s an iron law of human nature that wherever there is a way to take advantage of a system for profit, someone will do it.

Amazon.com is poised to make the so-called “long tail” of book publishing available to all of us, by allowing ‘print on demand’ publishers to list their books in Amazon’s online catalog, and then print the copies individually, whenever an order comes in. It’s an idea with a lot of promise: by eliminating overhead, PoD allows books on incredibly niche subjects — which traditionally would have had a single short-run printing and then gone out of print, or not been printed at all — to stay available and in print.

But now this technology has found its own problem, eerily reminiscent of email’s spam and the web’s ad-ridden pages: automatically-produced ‘books’ consisting of database dumps on a particular subject. Like typosquatters who buy up thousands of domain names, knowing that it only takes a few ad hits to recoup the cost, or an email spammer who sends out billions of messages knowing only a few will lead to sales, a ‘titlesquatter’ can create thousands of ‘books’ in a database like Amazon’s, each on an almost ridiculously-niche subject. If an order comes in, the information is quickly assembled from publicly-available sources and the tome is sent out.

Phillip M. Parker, a professor of marketing at INSEAD, seems to be taking this route. He has over 80,000 books listed on Amazon, on subjects ranging from obscure medical conditions to toilet-bowl brushes. According to a Guardian article, they are written by a computer, at a rate of approximately 1 every 20 minutes.

Although some of the books do get positive reviews (not that this is saying much; Amazon’s review system is anything but unbiased), even the books’ supporters note that they are mainly compendia of Internet sources. This review of “The Official Patient’s Sourcebook on Interstitial Cystitis,” which retails for $24.95, is fairly representative:

I was very disappointed when I reviewed this book. It was almost as if the author(s) went to a search engine, and the NIH’s Medline, and the National Library of Medicine (PubMed) did a search for IC then made a book out of the results. … In my opinion, just a few hours on the web “today” will yield more current and useful information than that provided by this book. For those seeking information on IC, I suggest a search on “google.com” instead.

Others are more blunt:

The is downloaded copy of the NIAM website, and a list other research websites. I learned more from Google.

Although there may be a place and a market for ‘sourcebooks’ of this type, when they are clearly described and marked as being machine-written or -compiled, judging from the reviews it seems as though many consumers are purchasing them expecting more, and are consequently disappointed. This is bad news for print-on-demand, and the ‘long tail’ in general: if Amazon and others do not work to keep the quality of their catalogs high, consumers may learn to mistrust anything that’s not highly ranked in sales numbers. PoD already has a poor reputation within the publishing industry, and if machine-generated books with plausible-sounding titles become more common, to the point where users have to sort through dozens of infodump ‘sourcebooks’ to find one offering new information, the situation could get far worse. At worst, it could turn users away from reference books completely — why bother buying reference books, if the majority of them just reprint what you can find in an online search anyway?

Although nothing that Parker is doing is illegal or even contrary to Amazon’s current policies, it makes sense for Amazon and other retailers that catalog PoD books to nip this behavior in the bud, before it becomes a full-fledged epidemic. If there’s anything that we should have learned from email and web spam, it’s that what begins as an oddity and an annoyance can quickly become a major waste of time and resources.

0 Comments, 0 Trackbacks

[/technology] permalink

Conservative political strategist and blogger Patrick Ruffini has an interesting insider’s take on the fatal flaw of the Romney strategy. It was written on February 2nd, and seems even more relevant now — with Super Tuesday in the rear-view mirror — than it did then.

Huckabee and McCain represent two very distinct sides of both the Republican party and the ‘conservative’ movement in general. Huckabee is traditional and appeals to the base; McCain appeals to moderates and fence-sitters. That they are fundamentally different candidates is well-understood; this has basically been the nature of the Republican party since 1980 or so, and candidates’ overall success has largely been measured by how well they reconcile these two groups.

Enter Mitt Romney: onetime moderate, blue-state governor, Yankee Republican, entrepreneur. Realizing perhaps that it would be impossible for him to ‘out-liberal’ McCain without opening himself to accusations of being the Republican answer to Joe Lieberman, he made the strategic choice to place himself to the right of McCain and compete instead for the social conservative vote.

I thought and continue to think that this is a move requiring a whole lot of cojones. I’m not sure it was a good move, but you have to at least appreciate the inherent audacity. In theory, it’s pretty brilliant, but as good old Carl von Clausewitz once said, “Theory becomes infinitely more difficult as soon as it touches the realm of moral values.”

McCain is the Coca-Cola of GOP candidates, always performing at a consistent 30-40% … McCain does well in swing counties and liberal-leaning metro areas, but surprisingly, he doesn’t tank in rural, Evangelical areas. But Romney does.

My suspicion right now is that history will remember Romney’s bid as an interesting, but ultimately unsuccessful, gamble. What he probably could have been best at — wooing moderate voters and staking out a reasonable plank on both social and fiscal issues, backed with lots of past performance — was crushed as McCain and Obama both moved towards the center from opposite directions.

EDITED TO ADD: Romney dropped out earlier this afternoon, but has not yet pledged his delegates to any other candidate.

0 Comments, 0 Trackbacks

[/politics] permalink

Thu, 07 Feb 2008

If you’re a Mac user, even an occasional one, and have been waiting with bated breath for a native version of TrueCrypt, the wait is now over. (Okay, technically it was over on 2/5, two days ago.)

TrueCrypt 5.0 includes Mac OS X native versions for both Tiger and Leopard on both PPC and Intel architectures, and the files it produces are binary-compatible with TrueCrypt for Linux and Windows.

Its use is not quite as straightforward as Apple’s Disk Utility, but in return it offers a far greater array of features, plus the cross-platform compatibility that Apple’s proprietary encrypted .dmg format lacks.

One of the most widely-touted features is the ability to create invisible ‘hidden volumes’ within the free space of other encrypted volumes. Another is the choice of ciphers; while Apple supports AES, TrueCrypt offers AES, Serpent, Twofish, and combinations thereof (any two or all three at once, operating in serial on the same blocks). It also allows a choice of three hash algorithms, including the openly-developed RIPEMD-160.

Just for test purposes, I created a 4GB volume using Twofish and RIPEMD-160; actual volume creation on a Dual 2GHz PPC G5 ran at about 9.5MB/s. Copying to it seemed to average around 3MB/s, with excursions up to around 5-6MB/s and periodic short stalls. Overall, a 600MB file took about 4 minutes to move onto an encrypted volume.
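Those figures hang together, as a quick back-of-the-envelope check shows (the numbers come straight from the test above):

```shell
# 600 MB moved in ~4 minutes works out to about 2.5 MB/s effective,
# which squares with an observed ~3 MB/s average punctuated by stalls.
awk 'BEGIN { printf "%.1f MB/s\n", 600 / (4 * 60) }'   # → 2.5 MB/s
```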

One of the few features missing from the Mac version is the ability to create sparse files that expand in size as they are filled. (This is possible with .dmg files, although it requires the command-line ‘hdiutil’.) I’m not clear on the details, but it sounds like TrueCrypt’s sparse-file support relies on the NTFS filesystem. But given the problems I’ve had with sparse files in the past (they get easily mangled when copied across filesystems and various OSes), and the low cost of storage, I’m pretty content sticking with static files.

Overall, this is a big win both for Mac users and TrueCrypt users in general, since it makes the product that much more flexible overall. As an encrypted container format I think TrueCrypt is fast becoming the de facto standard, and now you can put a FAT-formatted .tc file on a USB stick and be pretty much assured that it will be readable no matter where you go.

0 Comments, 0 Trackbacks

[/technology] permalink

Tue, 05 Feb 2008

I’ve wondered for a while if it’s possible to construct a swappable external hard drive by putting one of those cheap removable IDE drive drawers inside a 5.25” external FireWire enclosure, but not enough to actually go out and buy all the parts and then have to return them when it didn’t work. However, thanks to the wonder of the Internet I was recently able to pick up both pieces for under $20 from the clearance section of Geeks.com.

Short answer: it actually works. (Even spindown.) For twenty bucks plus shipping and a spare IDE drive, you can make yourself a functional analog of a Quantum GoVault, suitable for all sorts of disk-based backup tasks.

In retrospect, I’m not sure why I thought it wouldn’t work. I think I was assuming far more complexity on the part of the IDE drive drawer than actually exists; I figured they were an actual backplane with some logic to let you hot-swap the drive. In reality they’re nothing more than a big Centronics connector for the power and IDE connections, a few LEDs, a fan, and a key-operated on/off switch to keep you from ripping the drive out while it’s spun up. It doesn’t get much more basic than that. Once you have the drive inserted and locked, the FireWire bridge is none the wiser.

Where things get messy is if you actually try to hot-swap the drive while it’s in use. Because the FW bridge isn’t designed for hot swapping on the IDE side, bad things tend to happen when you remove the drive and reinsert it while the FW bridge is on. However, as long as you power off the entire external enclosure (rather than just the drive), then swap drives, then power back on, everything works nicely. I think of it as sort of a ‘warm-swap.’

I prefer the drive-tray-and-enclosure solution to just buying multiple 3.5” enclosures because the 5.25” one — unfortunately no longer available from Geeks.com but easily found elsewhere under its model number, “PM-525F2-MOS” — has significantly better heat characteristics than most cheap 3.5”s (it’s aluminum and has a fan, for starters; the unvented plastic ones eat drives for breakfast), and the cost per additional drive is much lower.

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 30 Jan 2008

I spent a while explaining Spamgourmet to some coworkers today. It amazes me a little that more people aren’t aware of it, and that it gets mentioned so seldom in the popular and trade press.

Lots of people understand the benefits of having multiple email addresses; one that you keep to yourself and give only to trusted friends, and another that you use more widely (for site signups and for doing business with companies that you know are likely to spam you). Spamgourmet takes this concept further and allows you to create a basically-infinite number of disposable addresses. Instead of just having one ‘untrusted’ address, you can have one for each skeezy company you have to give a working address to.

This is pretty cool, because it allows you to turn addresses on and off at will. You can have an address that only allows emails in from one domain or address, or only works for a specified number of messages, silently ‘eating’ everything else.

The best part is that Spamgourmet lets you look at your list of addresses and see which ones have received the most spam. If you give out unique addresses to each company, it’s trivial to see exactly who sold you out. (Worst offenders: sketchy PayPal clone “ChronoPay,” followed by a litany of UBB-based forums. A plague on both your houses.) It’s pretty awesome to look in and see that you’ve been spared 50,000 spam messages over the course of 4 years, thanks to the service.
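For the curious, Spamgourmet’s disposable addresses take the form word.N.user@spamgourmet.com, where N is the number of messages the address will accept before it starts ‘eating’ everything. A trivial helper for minting one per company might look like this (the username is a made-up example, and the format is from memory, so check it against the site before relying on it):

```shell
# Mint a Spamgourmet-style disposable address: word.N.user@spamgourmet.com,
# where N caps how many messages get through. ("kadin" is a made-up user.)
sg_addr() {  # usage: sg_addr <word> <count> <user>
  printf '%s.%s.%s@spamgourmet.com\n' "$1" "$2" "$3"
}
sg_addr chronopay 3 kadin   # → chronopay.3.kadin@spamgourmet.com
```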

Did I mention that it’s free? (Really, no-strings-attached, no advertising, we-don’t-want-your-money kind of free.)

It’s one of the few things that I flat-out recommend to everyone. It really has no downside. It takes a few seconds to set up, and can keep your inbox from being overrun for years to come.

2 Comments, 0 Trackbacks

[/technology/web] permalink

Bruce Schneier has an excellent short essay on the latest fallacy being parroted by the ‘homeland security’ apparatchiks: ‘security versus privacy.’

Security and privacy are not opposite ends of a seesaw; you don’t have to accept less of one to get more of the other. Think of a door lock, a burglar alarm and a tall fence. … The debate isn’t security versus privacy. It’s liberty versus control.

The idea that security and privacy are at either ends of a spectrum, that some tradeoff is always required or a ‘balance’ always struck, is, he argues quite convincingly, completely false. Most good security actually increases privacy, rather than diminishes it.

The problem is conflating ‘security’ with ‘control.’ People who have spent too long in government, or other organizations with strict top-down management styles, apparently think that the only path to security involves giving them control of everything. It’s the worst kind of paranoid micro-management, and it’s directly at odds with democracy, which is not a top-down organization — quite the opposite.

It’s the mindset that imagines that the easiest way to prevent aircraft hijackings is to compile dossiers on every passenger aboard, rather than working to make the planes harder to hijack. It’s the mindset that wants to check for IDs and confiscate shampoo rather than screen for threatening behaviors that match actual terrorist profiles.

The worst part, the biggest irony of it all, is that this ‘security’ doesn’t even work very well. It creates inflexible chains of command, concentrates vulnerable points of failure, and tends to be reactive rather than proactive. It wastes resources and distracts from the real issues. And that’s just the tip of the iceberg.

However, most people have heard the security/privacy dichotomy so many times that they’ve come to accept it as truth, even if there’s not really anything behind it. It has the ring of truthiness to it. That’s why it’s so dangerous.

0 Comments, 0 Trackbacks

[/politics] permalink

Sat, 26 Jan 2008

If you needed any more evidence that we are doomed as a civilization, this ought to do it.

Yes, that’s right: for just eighty-two dollars and ninety-nine cents (plus $2.95 shipping!), the “iPod Stereo Dock Speaker and Bath Tissue Holder” can be yours.

And just in case the idea of a toilet-paper holder with an iPod dock isn’t good enough for you, it gets better. It’s portable. Yes, you can remove it from the wall mount, fold away the toilet-paper mounting ears, and use it as a portable speaker system. Because really, who wouldn’t want to carry around a toilet-paper holder?

From the list of features:

  • Perfect way to enjoy your favorite music in ‘any room’
  • iPod dock features four integrated high performance moisture-free speakers for fine clarity and sound
  • Dock charges your iPod while playing music
  • Compatible with iPod shuffle and other audio devices with audio selector
  • Integrated bath tissue holder can be folded as stereo dock
  • Requires AC power (AC adapter included)
  • Easy to remove from wall mount
  • Two tweeters for highs
  • Two woofers for lows

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 25 Jan 2008

Although I’m obviously several days too late to participate in the whole “Blog for Choice” party — not really due to lack of interest but more because I really felt like I had nothing to add — I couldn’t pass up the opportunity to pass along one link, compliments of baby_balrog on MetaFilter:

“Is Abortion Murder” by Graham Spurgeon.

I find it interesting because it’s exactly the sort of argument I’d never really try to make, or be able to. Social-utility arguments? Sure. Legal arguments? Sure. Rights-based arguments? Definitely. But religion-based arguments? I wouldn’t know where to start.

And that, I think, is part of the problem. While listening to a recent debate between the president of the National Organization for Women (NOW), and some Washington flack for NRLC, it became apparent to me that each group was speaking its own language. There wasn’t even the semblance of discussion, and certainly no possibility of winning anyone over who wasn’t already convinced, because each was speaking in the language that their supporters know and understand.

When someone from Planned Parenthood, NOW, or NARAL speaks, it’s generally a pretty safe bet that they’re going to emphasize the right of an individual to control their own body, and perhaps the personal and social cost of unwanted pregnancies and children. When a pro-life advocate speaks, it’s almost always about “babies.” Occasionally there’ll be hints made at promiscuous sex and slut-punishing, but usually the emphasis is on those “unborn children” and the inherent value of potential human life.

Spurgeon’s essay bridges this gap a little. It’s a pro-choice argument, but written entirely in Biblical terms. While I can’t comment on or critique his scriptural references, it’s at least a different approach.

0 Comments, 0 Trackbacks

[/politics] permalink

PollingReport.com has a nice selection of national opinion polls on the Democratic race for the Presidential nomination. Most of them show Clinton over Obama, about 40% to 30%, with Edwards a distant third with ~10% and then minor candidates and ‘unsure’ making up the remainder.

Obama does seem to be closing the gap, though I’m not sure it’ll be enough to actually bring in a win. The AP shows him gaining almost 10 points over the holidays (from 23% on 12/5/07 to 33% on 1/17/08), putting him within reach of the front-runner.

The really odd poll in the bunch is one conducted by “Financial Dynamics” on Jan 10-12, which showed Clinton at 38% and Obama at 35%; essentially equal when uncertainty is taken into account. While it’s hard to be sure, the difference between these results and the AP / USA Today polls seems to be that it didn’t allow ‘Unsure’ as a choice; it forced respondents to pick one or the other. I think Clinton benefits from name-recognition here, but that doesn’t necessarily translate into votes, since many ‘unsure’ voters may not bother to vote in the primary anyway.

If slick Flash applets are more your cup of tea, USA Today has a neat Presidential nomination poll tracker (requires JavaScript and Flash). Its ‘poll of polls’ puts Obama strongly in the lead in South Carolina, still behind in Florida, approaching parity in California, and still significantly behind in New Jersey and New York (but with an upwards trend).

There seems to be a lot of speculation going around that the current focus on the economy will hurt Obama and help Clinton, but so far the polls don’t seem to be reflecting that. If he wins in South Carolina, as seems likely, Clinton may find it very difficult to maintain her national lead going into the remaining primaries and Super Tuesday.

0 Comments, 0 Trackbacks

[/politics] permalink

Mon, 21 Jan 2008

In the wake of McCain’s victory over rivals Romney and Huckabee in South Carolina, there’s been no shortage of analysis. Some of the best, in my opinion, has come from the Washington Post’s “The Trail” campaign blog. Although nothing is certain, it’s looking more and more like he’s the only viable Republican candidate, and the general election will be either Clinton or Obama vs McCain.

Although South Carolina contains proportionally just as many evangelical Christians as Iowa (about 60% according to WaPo), far fewer of them were interested in drinking the Huckabee Kool-Aid this time around. Whether this is because of differences in campaign strategy — Huckabee had far longer to spend in Iowa, for starters — or in changing perceptions of his viability isn’t certain. But it can’t be good for him, and it can’t be anything but good for McCain.

Really, though, the McCain/Huckabee race isn’t anything new. It’s essentially the same internecine fight between the old guard and the newer, ‘faith-based’ Right, just as McCain/Bush was in 2000. Except that while Bush was moderate enough (in Republican terms) to capture both Evangelicals and traditional conservatives, Huckabee is proving too frightening, too populist, and overall too nonsecular to do the same. McCain’s decisive win in S.C., the state where his 2000 campaign finally stalled, should be indication enough to the Huckabee camp that they can’t follow the Bush plan to victory.

Although it’s too much to expect the Huckabee camp to just give up and go home quietly, the S.C. primary would seem to move the focus over to McCain vs Romney. Unlike the case with Huckabee, where each represents a distinct faction within the Republican party, the battle lines here are more fluid. Romney purports to be the last of a dying breed: a ‘Yankee Republican,’ fiscally conservative and comparatively socially liberal. McCain, on the other hand, has spent years cultivating his image as a ‘maverick.’ Both are self-described moderates, and both would court the same independent and swing-vote bloc in a general election.

Ultimately I think Romney’s Mormonism and accusations of being a ‘crypto-liberal’ hurt him more than McCain’s ‘professional politician’ background can in reverse (clumsy attempts at swift-boating notwithstanding), and Romney will be viewed as too controversial to even be left with a VP slot.

But time will tell, and there’s not that long left to wait.

0 Comments, 0 Trackbacks

[/politics] permalink

Sun, 20 Jan 2008

[Image: a tree growing in rotting school books.] I think this image really speaks for itself; there’s not much that I can say to add to it. The photographer, username “Sweet Juniper” on Flickr, discusses it on their blog. Be sure to view it at a large size in order to appreciate it properly.

Although I’ve done my share of photographs in abandoned buildings, most of the places I’ve been were your pretty standard post-industrial, “the world has moved on” landscape. They really have nothing on these locations, which are positively apocalyptic.

As a quasi-counterpoint — lest you start to draw overbroad conclusions about the city as a whole — the “Detroit is Beautiful” set, by the same photographer, is also worth a look.

(Via Reddit; also spotted on BoingBoing. Image is CC-BY-NC-ND 2.0.)

0 Comments, 0 Trackbacks

[/other] permalink

Until earlier today I’d never heard of 1MDC, but after running across it in a discussion on Liberty Dollars, I got curious. The story of 1MDC is a strange one, shrouded in more than a little mystery. However the more I’ve learned about it, the more interesting it gets.

1MDC is, or rather was, a digital currency, providing a service not dissimilar in general concept to PayPal, but using gold as a medium of exchange rather than traditional national currencies. This in itself isn’t unique — E-Gold Ltd. is perhaps best known for it — but 1MDC approached the problem slightly differently.

While E-Gold Ltd. and its competitor services, including GoldMoney and Pecunix, have actual gold reserves stored in vaults, 1MDC functioned as something of a ‘meta currency.’ Its ‘gold’ reserves were maintained in the form of balances in accounts with other gold-backed digital currencies (principally E-Gold).

This in itself is fairly interesting, because it’s such a departure from the business model shared by the other digital electronic currencies. In a way, 1MDC represented a ‘second generation’ digital currency, relying completely on ‘first generation’ currencies for its solvency.

1MDC offered its depositors several advantages over using E-Gold directly, the principal one being lower fees. While E-Gold charges an account maintenance fee of up to 1% per year with a maximum of 0.05g per account, 1MDC charged nothing. This was possible because 1MDC pooled users’ assets into a handful of E-Gold accounts, paying only one maintenance fee per account. 1MDC covered its extremely low overhead (relative to E-Gold’s) and apparently made a profit by charging its highest-volume customers — those with more than 100 transactions per month — a per-transaction fee, and by levying a 5% charge on transfers back out to other digital currencies.
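To put the fee difference in concrete terms, here is the E-Gold maintenance fee from the figures above (1% per year, capped at 0.05g per account) computed for a couple of made-up balances; under 1MDC’s schedule, both would have been zero:

```shell
# E-Gold maintenance fee as described above: 1%/yr, capped at 0.05 g/account.
# (The balances are made-up examples; 1MDC charged no maintenance fee.)
egold_fee() {  # usage: egold_fee <balance-in-grams>
  awk -v g="$1" 'BEGIN { f = g * 0.01; if (f > 0.05) f = 0.05; printf "%.4f\n", f }'
}
egold_fee 2    # → 0.0200  (1% of 2 g)
egold_fee 10   # → 0.0500  (hits the cap)
```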

As innovative as 1MDC may have been, its days were numbered. By using other digital currencies as its reserve, it put itself in the precarious position of becoming instantly insolvent if something happened to those accounts. In mid-2005, something did, in the form of an investigation by the U.S. Department of Justice into gold-backed electronic currencies. Although the investigation warrants a discussion in itself — and it has produced two Wired articles (“E-Gold Gets Tough On Crime” and “E-Gold Founder Calls Indictment a Farce”) to date — the death knell for 1MDC came when its E-Gold reserve accounts were ordered frozen. Overnight, it practically ceased to exist.

It’s not entirely clear who 1MDC’s primary users were. The DoJ would have us believe that gold-backed digital currencies in general appeal to terrorists, pedophiles, pornographers, and drug smugglers — the “four horsemen of the Infocalypse,” as Cory Doctorow once called them — but the reality is murky. Judging by the places information on 1MDC is found, it’s fairly obvious that pyramid and other ‘make money fast’ schemes may have been involved. But many have been quick to point out that the U.S. Government has a certain amount of self-interest in eliminating any and all competition to the USD, particularly currencies that defy long-established conventions.

Although 1MDC itself is dead, the concept and business model are too simple to ever really destroy. It wouldn’t surprise me at all if right now there’s an underground version of 1MDC in existence, perhaps with reserves a little more wisely chosen.

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 18 Jan 2008

As has been widely reported and discussed by now, AOL seems poised to switch its IM network from the proprietary OSCAR protocol to the open XMPP. The biggest piece of evidence is that they are running a test server, xmpp.oscar.aol.com, which is accepting XMPP connections and allows users to log in using their AIM ID.

If they move forward with XMPP, it would be a major step forward for both interoperability and open standards. The amount of time and effort which has been wasted as a result of the IM networks’ use of proprietary protocols is simply staggering. Were it not for mutual incompatibility, all the effort directed at making third-party clients like Adium and Pidgin work with various and sundry protocols could have been spent actually making them into better communications tools from the user’s perspective.

There are still a few steps which need to happen before AOL’s XMPP effort can be considered useful. First, they need to connect it to the rest of the AIM servers, so that users connecting via XMPP can talk to users on legacy OSCAR clients. Second, they need to enable XMPP server-to-server connections, so that users can talk with other networks. Once that happens, it’ll be curtains for OSCAR. (Not immediately, of course — there are lots of people out there still using old client programs and presumably happy with them, but when they eventually update it’ll be to XMPP.)
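For the curious, the first thing an XMPP client sends after connecting (normally on TCP port 5222) is a stream-opening stanza, which is how a test server like the one AOL is running would be probed. Here’s a minimal sketch that just builds that stanza; the namespaces are the standard ones from the XMPP spec, and actually talking to a server would of course require a live network connection.

```python
def xmpp_stream_header(to_domain: str) -> str:
    """Build the opening <stream:stream> stanza an XMPP client sends
    immediately after connecting. Note it is deliberately left unclosed:
    the stream stays open for the life of the session."""
    return (
        "<?xml version='1.0'?>"
        "<stream:stream to='%s' "
        "xmlns='jabber:client' "
        "xmlns:stream='http://etherx.jabber.org/streams' "
        "version='1.0'>" % to_domain
    )

header = xmpp_stream_header("aol.com")
print(header)
```

A client would write this to the socket and then read back the server’s own stream header plus its advertised features (TLS, SASL mechanisms, and so on) before authenticating.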

0 Comments, 0 Trackbacks

[/technology] permalink

Mon, 07 Jan 2008

Globalsecurity.org has a nice timeline of news coverage related to the Israeli airstrike on an alleged Syrian nuclear facility on September 6, 2007. The strike is interesting because of the ‘deafening silence’ and lack of mainstream news coverage that originally surrounded it, although based on the available evidence it may well be looked back on as a defining moment in Middle East geopolitics.

For those who haven’t been keeping track, it appears as though Israeli F-16s, acting with U.S. approval and flying through Turkish airspace (presumably without approval), bombed a facility in Syria which the U.S. and Israel believed contained nuclear-weapons materials. The bombing itself may have been preceded by a covert ground raid to recover evidence sufficient to convince the U.S. of the threat. The alleged nuclear materials, and potentially some personnel killed on the ground, may have come from North Korea, which was one of the few states (besides Syria itself) to vociferously protest.

News coverage has been spotty at best, and it’s only recently that the pieces have been assembled into something approaching a clear picture of what might have occurred. The Globalsecurity.org timeline is interesting because it is essentially a meta-analysis of the news coverage, and provides insight not only into the event itself but into the way the event was covered by the press.

On the whole it seems a bit early to tell whether the Spectator’s coverage—which sparked a number of discussions about the increasing use of the phrase “World War Three”—was overwrought or prescient.

0 Comments, 0 Trackbacks

[/politics] permalink

Sun, 06 Jan 2008

It’s been a while since I’ve written anything, mostly because I’ve been too busy reading. One book that I just finished and I’ve decided is worth special mention is Marc Levinson’s The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger. (Also available from Amazon.) The book deftly covers much of the economic, political, social, and technological evolution of the now-ubiquitous 40-foot “box,” and gives some fascinating insights into our modern, globalized society in the process.

If you are in the least bit a transportation geek, or if you have any level of curiosity about how the products you use every day get from the other side of the world to your door (and how the system that accomplishes this came to be), I highly recommend it.

0 Comments, 0 Trackbacks

[/technology/transportation] permalink