Kadin2048's Weblog

Tue, 20 Oct 2009

I was flipping through the channels on TV earlier and came across a new addition to the local lineup — something called The Research Channel. Apparently it broadcasts recordings of presentations by various notable people on a variety of subjects. The recording that caught my eye was Behind the Code with Jim Gray. Gray, who at the time of the interview (2005) was at Microsoft Research but had formerly worked at IBM, Tandem, and DEC, had some interesting comments about databases, parallel processing, and the future of hardware.

At one point (about two thirds of the way through the video), he describes future processors as probably being “smoking, hairy golfballs.” The ‘smoking’ part is because they’ll be hot, consuming and dissipating large amounts of power in order to run at high clock speeds; hairy because they’ll need as many I/O pins as possible, on all sides; golfballs, because that’s about the maximum size you can achieve before, at very fast clock speeds, you start to run into the “event horizon” (in his words) of the speed of light and lose the ability to propagate information from one side of the processor to the other in one clock cycle.
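
To put a rough number on the ‘golfball’ part: in one clock cycle a signal can travel at most the distance light covers in that time, and real on-chip signals propagate at only a fraction of c. A quick back-of-the-envelope sketch, with the clock speeds chosen purely for illustration:

// Upper bound on die size: the distance light travels in one clock cycle.
// Real on-chip signals are slower than c, so the practical limit is tighter still.
var c = 299792458; // speed of light, m/s

[3e9, 10e9, 30e9].forEach(function (hz) {
    var cmPerCycle = (c / hz) * 100;
    console.log((hz / 1e9) + " GHz: " + cmPerCycle.toFixed(1) + " cm per cycle");
});
// 3 GHz ~ 10 cm, 10 GHz ~ 3 cm, 30 GHz ~ 1 cm; a golf ball is about 4.3 cm across.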

He didn’t give a timeline on this prediction so I’m not sure it’s fair to call it either correct or incorrect just yet, but it’s interesting. The ‘smoking’ part actually seems to have gone in the opposite direction since 2005; power dissipation has come down from the highs of the Pentium 4 and IBM PowerPC G5, but it’s possible it could creep back up again if something stops the current trend. He seems to have been right, at least in a limited sense, about ‘hairy’: a look at new processor sockets shows a definite upward trend, with Intel’s newest at more than 1500 pins — common sockets in 2005 had fewer than half that. They’re still all on the bottom of the package, though. The ‘golfball’ limit on size is more theoretical, but I don’t think anything has happened recently that provides cause to dismiss it.

After watching the segment, I pulled up the Wikipedia page on Gray, curious to see what he was up to today. Unfortunately, it was at that point that I remembered why his name seemed so familiar: he disappeared in 2007 while sailing solo off the coast near San Francisco and, despite a massive crowdsourced search effort, was never found. A sad and unfortunate end for a very interesting guy.

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 18 Oct 2009

I really like Yelp, which is probably why I’ve bothered to spend time typing up reviews for it, despite it being a commercial service that could theoretically pull a CDDB at any time. Via Yelp I’ve found a lot of neat little restaurants that I wouldn’t otherwise have found, particularly while traveling, and in general I’ve found the ratings and reviews there to be of very high quality.

However, I’ve noticed that as Yelp’s userbase has grown and expanded beyond the computer-savvy foodie demographic that seemed to make up its earliest users, the average ratings for a particular business are no longer as useful as they once were. It used to be that if a restaurant had five stars and more than a handful of ratings, it was almost certainly phenomenal. Similarly, if a place was languishing at one or two stars, it was probably best avoided — after all, if a place is bad enough to actually get someone (who isn’t being paid) to spend the time to write a negative review, something must be pretty wrong. And if something was in the middle, chances were it was pretty much just average for whatever cuisine it was trying to represent.

Lately, though, I’ve noticed that many places — and this is especially true of eclectic or “acquired taste” restaurants — are getting pushed towards middling reviews not because anyone is actually rating them that way, but because very good and very bad reviews are being averaged out into two or three stars. This isn’t really surprising: reviewing restaurants is a “matter of taste” practically by definition. But that doesn’t make the result very useful. When I’m looking down the search results in Yelp, I want to know what I am likely to enjoy, not what some hypothetical “average user” is going to like. (I’m not the first to notice this problem, either.)

As more and more users join Yelp and start writing reviews, the average review will naturally start to approach what you’d get from reading the AAA guide, or any other travel or dining guide aimed at a general audience. That’s not necessarily bad, and when you’re writing a travel book or dining guide it’s pretty much exactly what you want: try to give an idea of what most people will think of a particular restaurant.

But that’s certainly not the best that an interactive system can do, not by a long shot. The benefit of a website, as opposed to a book, is that the website doesn’t necessarily have to show the same exact thing to everyone. This is why the front page of Netflix is more useful than the top-ten display down at your local Blockbuster, or why Amazon’s recommendations are typically more interesting than whatever happens to be on the aisle-end display at Borders. It’s not that Blockbuster or Borders aren’t trying — they’re doing the best they can to please everyone. The beauty of a dynamic website is that you don’t have to try to please everyone with the same content; you can produce the content in a way that’s maximally useful to each user.

If Yelp took this approach, ratings from users who tend to like the same things that I do would be weighted more heavily when computing an establishment’s overall score; if you brought up the same restaurant (or, more importantly, if it came up in your search results), it might have a different score, if your preferences — as expressed via your reviews — are significantly different from mine. This makes perfect sense, and provided that there’s still some way to see what the overall, unweighted average review was (Netflix shows it in small print below the weighted average), it’s a no-lose situation from the user’s perspective.
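
As a sketch of what that weighting might look like (the similarity measure and all the data structures below are invented for illustration; Yelp’s actual scoring is, of course, its own):

// Hypothetical personalized score: weight each reviewer's rating by how closely
// their past ratings track mine. Everything here is made up for illustration.
function similarity(mine, theirs) {
    // Compare ratings on places we've both reviewed; map the average disagreement
    // (0 to 4 stars) onto a weight between 1 (identical taste) and 0 (opposite taste).
    var shared = Object.keys(mine).filter(function (k) { return k in theirs; });
    if (shared.length === 0) return 0.5; // no overlap: fall back to a neutral weight
    var diff = shared.reduce(function (sum, k) {
        return sum + Math.abs(mine[k] - theirs[k]);
    }, 0);
    return 1 - (diff / shared.length) / 4;
}

function personalizedScore(myRatings, reviews) {
    // reviews: one establishment's reviews, e.g. [{ rater: { place: stars, ... }, stars: 4 }, ...]
    var num = 0, denom = 0;
    reviews.forEach(function (r) {
        var w = similarity(myRatings, r.rater);
        num += w * r.stars;
        denom += w;
    });
    return denom > 0 ? num / denom : null;
}

// The plain unweighted average, for the "small print":
function plainAverage(reviews) {
    return reviews.reduce(function (s, r) { return s + r.stars; }, 0) / reviews.length;
}

In practice you’d want a smarter similarity measure (a correlation over shared reviews, say) and some damping for reviewers with little overlap, but a weighted average along these lines is the basic idea.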

I’m sure that Yelp’s engineers are aware of the Netflix model and how it could be applied to Yelp, so this isn’t a suggestion so much as a hope that it’ll get implemented someday.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Wed, 07 Oct 2009

Earlier today I read a blog entry by Ben Metcalfe that really hit home. The entry is called “My GMail password scares me with its power,” and I’d like to say that he’s not the only one. Particularly in light of the widespread (and apparently quite successful) phishing attacks going around, it’s a good idea to think about how much of your life and personal information are stored behind that one password, and whether that password is really up to snuff.

Metcalfe puts forward what I think is a very modest proposal, one that boils down to two main points. Neither is trivial, but neither is a real stretch on technical grounds:

  1. Google ought to allow you to enforce some sort of privilege separation: rather than having one password for everything, you should be able to configure more sensitive services (GMail, Google Checkout, Search History) to use a separate password. This would ensure that the cached password saved in the chat program you use at work couldn’t be used to log into your mail, or to make purchases with the credit card associated with your Google Checkout account.

  2. Users who are security-conscious could buy a two-factor authentication token, like an RSA SecurID, to use with some or all Google services. This wouldn’t be mandatory and it wouldn’t be free — so it wouldn’t help the clueless or the broke — but it would let people who are honestly concerned about security, but who lack the ability to replicate Google’s services themselves (and, let’s face it, just about nobody can replicate Google’s services at this point), get that security on top of Google’s offerings.

Perhaps neither is economically feasible right now; too few users may care about security — and be willing to pay for it — to cover what either would cost Google to implement. But as users put more and more of their data in the hands of managed services like Google’s, and security breaches start having more serious consequences, the demand will come.

In the meantime, what’s a concerned user to do? The best thing you can do is choose a more secure password. If you don’t mind potentially creating something that you can’t memorize, use a random-password generator and either write the result down or store it in a ‘password keeper’ program that encrypts its data file with one (good!) master password. I take the latter approach, and use the open-source Password Safe on Windows and Linux, and Password Gorilla (which opens Password Safe database files) on Mac OS X. And, of course, take all the usual precautions against potential phishing attacks.
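
On the random-generation point, a bare-bones generator is only a few lines; here’s a sketch (the character set and length are arbitrary, and a serious tool should draw from a cryptographically secure source of randomness rather than Math.random):

// Minimal random password generator. Math.random() is NOT cryptographically
// secure; a real password keeper should use the platform's secure RNG instead.
function generatePassword(length) {
    var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" +
                "0123456789!@#$%^&*()-_=+";
    var out = "";
    for (var i = 0; i < length; i++) {
        out += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return out;
}

console.log(generatePassword(16)); // something like "q9$Sg2b)Vx0u+RfD"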

Until Google sees fit to improve on the one-username/one-password architecture for all its services, that’s about the best you can do.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Fri, 11 Sep 2009

Earlier today my copy of Quicken 2006 for Mac began refusing to download transaction activity from any of my bank or credit card accounts, complaining about an “OL-249” error. It took a bit of Googling to figure out what was going on, so I thought I’d post the solution here.

Short version: you need to download this fairly obscure patch from the Quicken website and install it. You should do this after updating via the regular File/Check for Updates option, and it is in addition to the updates provided via that route.

Longer explanation: from what I can tell, the certificates included with Quicken 2005, 2006 (which I use), and 2007 had relatively short expiration dates. They expired, and for some reason either weren’t or couldn’t be updated via the built-in update mechanism. Hence the additional patch. Why they couldn’t do this via a regular update push I’m not sure, but at least they made them available somehow — I would have half expected them to just tell everyone to upgrade.

Once I ran the installer against Quicken.app, online transaction downloading worked fine once again.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Wed, 01 Jul 2009

There’s been a bit of discussion recently over the idea of mileage-based road taxes replacing the Federal gasoline and diesel taxes that currently pay for the Interstate system, among other things. Most articles seem to have been prompted by a report from the “National Surface Transportation Infrastructure Financing Commission” (which somewhat strangely has a photo of the DC Metro in an underground station on its homepage) suggesting that the gas tax be phased out by 2020 and replaced by a mileage-based tax.

The proposal by the NSTIFC called for a GPS-based system to track road usage and upload it on a monthly basis for taxation purposes. This is stupid. It’s overly complex, it would be ridiculously expensive, it has major privacy concerns, its operation would be opaque to users, and it would almost certainly be open to abuse due to its complexity. It’s a terrible idea and the people suggesting it should be forced to read the RFPs of every overly-complex public sector IT project that has fallen flat on its face for similar reasons, until they repent for coming up with such a terrible idea.

However, the stupidity of that particular implementation plan doesn’t mean that all mileage-based taxes are a bad idea. The underlying concept is a sound one, and if it’s done right it might cause people to think harder about the services they’re using and how much it costs to maintain them. That’s a Good Thing in my book.

The kind of mileage-based tax I’d support would be a low-tech one. Calculate taxable mileage using annual odometer readings, conducted while vehicles are undergoing normal safety or emissions inspections. (There are states that currently don’t do emissions or safety inspections and would have to start, but that’s a far smaller change than what alternative schemes, e.g. the GPS-based one, would require.) Certainly it’s possible to roll back an odometer or bribe an inspector, but those things are already illegal, as are other kinds of tax fraud. Increase the penalties in proportion to the increased incentive to commit fraud, and we shouldn’t have much more trouble with odometer tampering than we currently do.

Basing the taxable mileage from the odometer reading doesn’t require invasive GPS tracking devices, which would doubtless be used for purposes well beyond tracking taxable mileage once installed. It doesn’t require any new technology, and in many places it makes use of the already-extant inspection infrastructure. It’s cheap — both from the user’s and the government’s perspective — and it would work.

Two of the most frequently-cited concerns regarding mileage-based taxes concern drivers who frequently travel outside the US, and drivers who spend most of their time on private roads (e.g. farm vehicles). The second issue — vehicles on private roads — is easy to address: if your vehicle doesn’t have a license plate and doesn’t normally operate on public roads (vehicles which today use untaxed off-road fuel), it doesn’t get taxed. If your vehicle does have a plate, it does. In the very worst case, this might force a very small number of edge-case users to get a second vehicle, if they currently have one that sometimes operates on-road using taxed fuel and sometimes off-road using untaxed gas, but this is such a small percentage of vehicles that I’m not sure it bears building policy around.

Addressing international driving is a more interesting question. The simplest, lowest-tech solution is probably to simply record mileage as part of the border-crossing process. If drivers who are crossing the border want a tax exemption on their non-US mileage, they could carry a logbook similar to a passport, specific to their vehicle, and have the mileage noted and certified by a Border Patrol agent as they crossed out of and back into the US. It would be up to drivers to determine whether, based on the amount of mileage they actually drive outside of the US, the paperwork was worth it.

What gets lost in the discussion of mileage-based taxes, and what I think bears attention, is that in any reasonably fair scenario, the taxes on passenger cars and light trucks should be vanishingly low. The bulk of mileage taxes should be placed on commercial vehicles weighing more than 6,000 lbs., because they actually cause wear and tear to the roads. Passenger cars essentially don’t. Whenever you see an Interstate being repaved, it’s generally either due to weather deterioration or to wear and tear by trucks. The weather-repair costs should be borne by all drivers essentially in proportion to the amount they drive, but the wear-and-tear expenses should fall squarely on heavy vehicles. In fact, the easiest way to ensure vehicles pay for the damage they do is to base the tax on the mileage driven multiplied by the maximum axle weight of the vehicle. (Road wear rises steeply with the load on each axle; the relationship is strongly nonlinear, with the usual rule of thumb being roughly the fourth power of axle load, so some research would be required to determine the actual rate tables.)
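
To make that concrete, here’s a hypothetical rate calculation; the per-mile base rate is a placeholder for illustration, not a figure from any actual proposal:

// Hypothetical mileage tax scaled by axle load. The fourth-power exponent is the
// usual road-wear rule of thumb; the 18,000 lb figure is the standard reference
// axle; the per-mile rate is an invented placeholder.
function annualRoadTax(milesDriven, maxAxleLoadLbs) {
    var referenceAxleLbs = 18000;
    var centsPerMileAtReference = 10; // made-up base rate
    var wearFactor = Math.pow(maxAxleLoadLbs / referenceAxleLbs, 4);
    return milesDriven * centsPerMileAtReference * wearFactor / 100; // dollars
}

console.log(annualRoadTax(12000, 2000).toFixed(2));  // typical passenger car: pennies
console.log(annualRoadTax(80000, 18000).toFixed(2)); // heavily loaded truck: $8,000

Run with numbers like these, the passenger car’s annual bill is pocket change while the tractor-trailer’s runs to thousands of dollars, which is roughly the point.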

And that brings me around to my only real objection to a mileage-based tax, which is also my objection to virtually all taxes except those placed on real property: the public needs an assurance from the government that the mileage tax would only be used for maintenance and construction of the transportation infrastructure, and not for whatever purpose Congress decides is politically expedient this season. This is because, when you start taxing a particular activity, you start to change the underlying incentive structure that drives people’s choices and lives. It is important to make the ‘retail’ cost (that is, the out-of-pocket cost paid by the consumer) of goods reflect the true cost to society of that good, but it shouldn’t be made any higher.

Federal road taxes should be used for the maintenance of the Federal road and highway system only — not for regional light rail projects (better funded by property taxes on those areas that will benefit) or for environmental remediation of fossil fuels (better funded by taxes on the fossil fuels themselves). And certainly not for schools, hospitals, police stations, or anything else, except insofar as the need for those things can be directly attributed to the existence of the Federally-maintained road network.

Some have objected to the idea of a mileage-based tax because under most proposals, it would not immediately replace the gas tax — that is, the gas tax would not drop to zero cents per gallon on the day a mileage-based tax went into effect. If both the mileage-based road usage tax and the gas tax were set properly, this would not be a problem. The mileage based tax would go towards infrastructure maintenance, and the gas tax would go towards remediating the environmental consequences and other negative externalities of petroleum use. Since there are a lot of negatives associated with burning oil, it should have a fairly high tax regardless of what we decide to levy for road use. Furthermore, the remaining gas tax should apply across the board to all petroleum products intended for combustion, not just road fuels: this means oil used in power generation, on farms, or by railroads shouldn’t be exempt. If you burn it and vent the byproducts into the atmosphere, it should be taxed: it’s not a “road usage” tax anymore, it’s a “petroleum combustion” one. (Here’s where you build your CO2 or climate-change taxes, incidentally.)

Retaining — even increasing, if valid reasons exist — the tax on gasoline to cover its negative externalities also eliminates one other problem with a mileage-based tax: that it creates a perverse incentive to continue using petroleum vehicles and not switch to alternative fuels, which are cleaner and have fewer negative externalities associated with their use. Plus, as a bonus, if we institute a mileage-based tax with a weight component, we can stop punitively taxing diesel fuel as a backdoor way of taxing trucks for the damage they do to the roads. Diesels are more efficient and are favored in other parts of the world (where the tax regimes are less punitive) due to their inherent economy.

There are lots of reasons to hate the mileage-based taxation proposals that have actually been put forward, which would require GPS receivers and constant monitoring of every car on the road. However, there’s no reason to dismiss the idea of mileage-based taxes out of hand. Taxing based on services actually consumed is always a good idea in my book, and if it were done right, a mileage-based tax could help shape our actions in ways that avoid externalizing costs onto others. That said, I remain as cynical as ever about Washington’s ability to get this, or just about anything else, right.

0 Comments, 0 Trackbacks

[/politics] permalink

Fri, 12 Jun 2009

I discovered earlier today, while trying to load my personal X.509/SSL certificates onto my trusty Nokia E61i, that its personal-certificate support is for all intents and purposes intentionally broken.

When the user certificate is imported to Nokia Eseries devices, e.g to be used for authentication with WLAN connections, the certificate contents are checked and if there’s any issues with the certificate fields, the importing of the certificate will fail and the error message “Private key corrupted” is shown.

One common situation where this problem may occur is if the KeyUsage field in the certificate has the nonRepudiation bit enabled. A certificate with nonRepudiation bit is rejected by E60, E61, E61i, E65 and E70 devices because of the security reasons.

The workaround is to create a new user certificate where the nonRepudation bit is removed. The nonRepudiation bit is not necessary when doing a certificate based authentication e.g. in WLAN environments.

Yes, broken. I don’t care what the Nokia engineers were thinking when they put that “feature” in; it sucks. It basically stops you from using 99.9% of the certificates in the world (ones produced using the default settings of most issuers) and forces you to generate a brand-new certificate just for the device. That is totally unacceptable. Certificates cost money in many cases, and even when they don’t, they take time to create. Plus, managing multiple per-device (rather than per-user) certificates is a royal pain in the ass.

Nokia’s “solution” to the problem would force me to generate a brand-new certificate for my mobile, and then I’d need to replace the certificates stored on every other device with the new one in order to make sure I can decrypt S/MIME email. If I didn’t do this (if I used one certificate on the E61 and another on my desktop and laptop), the E61 wouldn’t be able to open encrypted email sent in response to messages originating from the other machines.

(This is all assuming the E61i can even do S/MIME, which I’m not 100% sure of; but since it can’t load my certificate, it’s a bit of a moot point.)

Hardcore failure on Nokia’s part. Security, no matter how well-meaning, is worse than useless if it breaks functionality or makes the user’s life this difficult. All it does is raise the ‘cost’ of security, and make it more tempting to forgo things like certificate-based authentication altogether.

Up until recently I’ve been pretty happy with the E61i, but I’m feeling more and more that they just didn’t take core functionality seriously enough. The software is flaky and unreliable, as is the Bluetooth stack (I get lockups about once a week when I’m using it with a BT headset or tethered to a laptop), and I question whether anyone who worked on the built-in email client actually used it. (When you have an entire QWERTY keyboard to work with, why does every action require at least 3 clicks on a miniature D-pad?) It does have its charms — JoikuSpot is amazingly useful, and I love not being locked into an iPhone-style “App Store” — but the warm fuzzies are really wearing off.

It’s starting to become clear to me why BlackBerry, and not Nokia, is so dominant among users who actually care about communication over everything else. (BlackBerry offers both S/MIME and PGP, although it seems like it may need to be deployed to an Enterprise Server rather than to individual handhelds.) It’s just unfortunate that the BlackBerry offloads so much intelligence to the BES/BIS backend; I’m not really comfortable being that tied-in to somebody else’s infrastructure, and I don’t really feel like running my own BES.

Maybe it’s time to take a look at Palm.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

Mon, 08 Jun 2009

Calculated Risk, one of my favorite finance and economics blogs, has a great article written by the late Tanta on The Psychology of Short Sales. The piece really hit home for me, because during my recent search for new quarters, I ended up drooling over a lot of short sale listings, only to be warned by my agent that they often take a very long time to execute and frequently fall through. I quickly cooled to the concept.

On paper, short sales are the ultimate win/win for an upside-down homeowner who wants to “walk away” and a lender who wants to minimize their loss. A short sale lets both parties avoid foreclosure, keeps a house from sitting empty and becoming a target for vandalism, squatters, and general neighborhood blight, and lets the homeowner remain on the property and leave gracefully when it sells. Plus, potential buyers get a house that hasn’t been trashed by a bitter ex-owner, or had its pipes freeze and burst due to over-winter neglect. Triple win, right?

That’s on paper. In practice, things often don’t work out so well. Because of the way short sales work, there’s often a disconnect between what the various parties involved in the deal think the property is worth. If they can’t reconcile their views, there’s no sale and the property goes back on the market, and eventually to foreclosure.

The biggest difference between a short sale and a traditional bank-owned post-foreclosure property (a “REO”, or “real estate owned”) is that in the latter case, the bank has already taken possession of the property, probably had it assessed, and accepted that they’re going to take a non-negligible loss. It’s just a non-performing asset sitting on their books at this point, one that they’d presumably like to unload at the earliest possible opportunity at an acceptable price. Contrast this to a short sale: the bank has just learned that the current homeowner can’t make their payments and wants out, and has responded by telling them to get a listing agent and put it on the market. They haven’t really written anything down yet. The big loss is still to come.

To a buyer, a short sale property ought to be more attractive than a REO, because it hasn’t been sitting vacant or gotten trashed during an ugly eviction. However, buyers quickly learn to beware these six words in any listing: “offers subject to third-party approval.”

When a buyer makes an offer on a REO, the offer goes to the bank and they get a pretty straightforward thumbs-up or thumbs-down. Either the offer is acceptable and it sells, or it isn’t and the bank is content to let it stay on the market a bit longer. Since the bank already owns the house, they just want to get the most for it they can.

When an offer is made on a short-sale property, it gets forwarded by the listing agent to the bank, who has the choice of whether to accept it or not. If they accept it, they’re almost certainly taking a loss and accepting a writedown on the original mortgage. There is a big psychological difference between this and the REO case, it seems to me: in a REO situation, the bank is trying to recoup as much as it can of an already-realized loss; in a short-sale, the bank is actually taking the loss as part of accepting the offer.

This psychological difference seems to manifest itself in the relative speed with which banks process the two different types of offers. REO offers get decisions rendered quickly; short sale offers can take months to process, during which both the buyer and seller live in uncertainty. This uncertainty causes buyers to make fewer offers on short sales than on REOs, and to offer less for short sales than they might otherwise. In theory there’s no reason why short sales should sell for much below what a regular owner/buyer sale would, but in practice they go for something closer to REO prices. This difference is, to my eyes anyway, almost completely due to the perceived arduousness of the short sale process.

In addition, there’s often a failure on the part of buyers and lenders to understand how the short sale benefits the other party, and how this affects the price they’re willing to accept. This is what Tanta explores in the Calculated Risk article. Lenders are only interested in a short sale if it results in a price that’s significantly (more than 40%) greater than what the property would fetch as a REO, post-foreclosure. Buyers, on the other hand, often try to bid less than what the property would fetch as a REO, on the assumption that the lender ought to be willing to take a little less on a short sale than they would as a REO, since they’re avoiding going through foreclosure. Hence, no deal.

In order to make short sales a more viable option for distressed homeowners who find themselves upside-down on their mortgages and unable to pay for them (or who simply want out and can’t sell normally and cover the mortgage), I can think of several things that need to happen:

  • Banks and other lenders need to assign more staff to “special assets” and other pre-foreclosure divisions, and realize that they can avoid needless trouble and expense by going the short-sale route versus foreclosure. They need to gear these divisions to providing fast up-or-down decisions on short sale offers, and empower employees to write down assets significantly (at least as much as the delta between the REO price and the loan face value) in order to make deals happen quickly.

  • Prospective buyers need to be better-educated about how short sales work, not only from their own perspective, but also from the owner’s and lender’s. They need to understand why a lowball, sub-REO offer isn’t going to fly with the lender. For a short sale to work, three parties — the owner, the buyer, and the lender — need to feel like they’re making out better than they would have via foreclosure. Offering substantially less than a property would fetch as a REO doesn’t allow that to happen.

  • Homeowners considering a short-sale, whether in financial distress or not, need to be better about selling their properties. They need to work hard to make it clear to buyers that they’re not selling a REO, and that the property is inhabited and well-maintained. If a house looks like a foreclosure property, it’s going to get offers that reflect that, and it will almost certainly end up as a foreclosure property eventually. I saw several short-sale properties during my recent search that were frankly worse than the average REO, and that just isn’t going to work.

As it turned out, I didn’t make any offers on the short sale properties that I looked at. Given the time available before I have to be out of my current rental, it just doesn’t make sense. And I definitely wasn’t alone: many short sale properties had been on the market for hundreds of days, while REOs are being snapped up almost daily by hungry buyers armed with low rate pre-approval letters.

Making the reality of short sales better match the concept would provide affordable homeownership to many buyers, a dignified ‘out’ for distressed owners, and smaller losses to lenders and their investors. But a lot has to happen before that will be the case.

0 Comments, 0 Trackbacks

[/finance] permalink

According to a post on the Full Disclosure mailing list, history has repeated itself: T-Mobile’s systems have apparently suffered a serious breach, and a lot of customer data has been compromised. Oops.

The last time something like this happened to T-Mobile, it was due to a known vulnerability in BEA’s WebLogic application server that T-Mobile had failed to patch correctly. Although the ‘hacker’ in question ended up in the Federal pen for his trouble (one hopes the celebrity email-reading was worth it), and a lot of attention seems to have been paid to the Secret Service’s identification and capture of him, the real culpability was T-Mobile’s. By failing to patch their servers, they left them wide open to infiltration.

It’s a bit too soon to tell whether the latest break-in was similarly due to technical incompetence at T-Mobile, or if they fell victim to some other method. However, it doesn’t sound like the ‘cybercriminals’ behind it all are the sharpest pieces of cutlery in the drawer. Unless they’re playing an amazingly deep game, I think it’s safe to say they didn’t think their cunning plan all the way through.

From the FD post:

We already contacted with [T-Mobile’s] competitors and they didn’t show interest in buying their data -probably because the mails got to the wrong people- so now we are offering them for the highest bidder.

Sounds almost petulant, doesn’t it? “Probably because the mails got to the wrong people” — really? They seriously think that’s the problem? If only they’d had the contact information for the Espionage Division of AT&T, the whole thing would have gone so smoothly!

They would have done better to read up on the Coke / Pepsi trade-secrets bust from back in 2006. A disgruntled Coke employee stole the secret Coke formula and tried to sell it to Pepsi, but Pepsi — much to her surprise, I’m sure — pretty much fell over itself notifying Coke of the offer, and then worked with the Feds during the ensuing investigation. Although the press coverage tried to make a heart-warming after school special out of the whole thing, Pepsi’s behavior should have been predictable and obvious: the risk of getting caught with stolen trade secrets from their fiercest competitor so greatly outweighed the value of those secrets, there was no way they would ever take the thief up on her offer.

The very same situation now exists for the morons who stole the data from T-Mobile. What competitor would even think of touching it? What could any competitor possibly gain from the data that would be greater than the huge downside risk, and could not be obtained more easily some other way? I can’t think of anything. Even if the files were totally complete, and represented dossiers on every one of T-Mobile’s customers completely documenting their behavior and preferences with regards to cellular telephony, it still wouldn’t be worth the near-certain chance of getting busted down the road, when T-Mobile notices a startlingly high number of their subscribers getting poached.

The smartest thing for AT&T, Verizon, Sprint, et al to do, on receiving an offer to purchase obviously stolen records, would have been to immediately report it to the Feds. That they didn’t makes me guess that they probably didn’t even take the offer seriously. How humiliating!

After failing to sell the goods (which I suspect are database dumps) to T-Mobile’s competitors, the thief or thieves then decided to just post a for-sale ad to the Full Disclosure mailing list — well known in IT security circles, but not known as a clearinghouse for stolen identities. It’s not as though there aren’t venues on the ‘net where trading and selling identity information is common — supposedly there are whole online communities for this purpose — but the FD list certainly isn’t one of them. The only way they could have been more bush league is if they’d used Craigslist. Or maybe eBay.

So that brings me to two possible conclusions about the whole breach:

  1. It was conducted by someone of questionable technical competence (at this point it’s too early to tell), but dreadful business skills, who couldn’t resist undermining the commercial value of the information they stole in order to claim credit in a high-profile way. They chose the FD list rather than some more appropriate sales channel because the FD list gets read by a fair number of security experts, and this means more geek cred. Of course, what ‘geek cred’ gets you in prison is beyond me. (Maybe Hans Reiser knows.)

  2. It was conducted by someone who knows exactly what they’re doing, and what we’re seeing is a carefully-constructed ruse of some sort. There might not be any information to sell, or else selling the information at a maximum profit might not be the real goal. Instead, the purpose would be to embarrass T-Mobile and tarnish its reputation as badly as possible.

Admittedly, option #2 does get a little tinfoil-hatty. To profit from it, someone would need to build up a huge short position in DT stock (or certain futures contracts), and hope that the news of the breach would cause the value to slide. However, I don’t even know if this would be realistic: DT is a huge company, and T-Mobile USA is only a part of it. Even a cataclysmic security breach might not do more than wiggle the needle of DT’s share price, requiring a huge position or lots of leverage to take advantage of it.

Neither theory bodes well for the people behind it; according to option #1 they’re clearly incompetent and far beyond their depth as far as professional criminality is concerned. Not exactly a hard target for law enforcement. According to option #2 they’re less stupid, but still at a huge disadvantage: the position they’d need to have built up to profit from the security breach would be visible in retrospect, possibly even obvious. Even spread between lots of accounts, I suspect it wouldn’t take long for the forensic accountants to catch on.

It will be interesting to see how everything pans out in the next few weeks. There is no scenario where it looks good for T-Mobile; those who, like me, are T-Mobile customers can only hope that this time they’ll learn the lesson a bit better, and put some more effort into security.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink

Sat, 23 May 2009

Steve Waldman has a good article over on Seeking Alpha about the difference between “transactional” and “revolving” credit. As we are in the middle of what is often described as a ‘credit crisis,’ understanding the difference between these two products is fairly important, as each has quite different benefits and hazards and creates different policy implications.

0 Comments, 0 Trackbacks

[/finance] permalink

Sun, 17 May 2009

In a few months (late August), I’m planning on heading to Yellowstone National Park for a week’s worth of outdoor recreation and, I hope, many opportunities for photography in one of the most beautiful parts of the U.S. Since this is the first major photographic excursion I’ve gone on with a DSLR instead of film, I’ve been putting some thought into the contents of my gear bag.

At the moment I haven’t decided whether or not I’ll be bringing a laptop with me on the trip. If I decide to forgo a computer, I’ll either need to buy a lot more CF capacity than I have now, or get something to download the cards onto when they get full.

The price per GB on CF cards varies with the speed of the card and the total capacity, with the newest, largest cards generally costing more per gigabyte than the older, smaller-capacity ones. The best deals currently going seem to be on the 100x (15MB/s) and 133x (20MB/s) cards, with a significant premium for the 266x (40MB/s) and faster UDMA varieties.

I found a 4GB 133x Kingston card for $15, and an 8GB 133x for $25, both at Adorama, and the latter with free shipping. That works out to around $3.13/GB — not too shabby, but not exactly disposably cheap.

It’s more interesting to consider the cost on a per-frame basis: each click of my Minolta Maxxum 7D’s shutter (when in RAW mode) consumes about 8.6MB, so if I were to use memory cards the same way I used to use film — treating them as consumables, at least for the purposes of my trip — I’d be paying about 2.6 cents per image. By comparison, bulk-loaded Kodak Portra (my color film of choice) is around 3.3 cents per image, and that’s just for the stock, neglecting processing costs and any waste.
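
The arithmetic, for anyone who wants to plug in their own card prices or file sizes (the prices are the Adorama ones above; the per-frame size is the 7D’s):

// Back-of-the-envelope storage cost: ~8.6 MB per RAW frame, 1 GB taken as 1024 MB.
function cardReport(label, priceUsd, sizeGb) {
    var frames = Math.floor(sizeGb * 1024 / 8.6);
    var centsPerFrame = priceUsd / frames * 100;
    console.log(label + ": $" + (priceUsd / sizeGb).toFixed(2) + "/GB, ~" + frames +
                " frames, ~" + centsPerFrame.toFixed(1) + " cents per frame");
}

cardReport("4GB 133x", 15, 4);   // $3.75/GB, ~476 frames, ~3.2 cents per frame
cardReport("8GB 133x", 25, 8);   // $3.13/GB, ~952 frames, ~2.6 cents per frame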

I knew digital photography was cheap, at least in terms of running costs, but that surprised me. CF cards are so inexpensive today that I could use them not only for in-camera storage but also as my archive copy, and I’d still come out ahead.

In terms of ‘film rolls,’ which is still a unit that I find myself thinking in, each 4GB card holds about 465 images, or roughly thirteen 35-exposure rolls. (I never shoot more than 35 frames on a roll of film, because I store them in binder pages that take 5-frame strips. Nothing is more annoying than having one or two extra frames at the end of a roll that don’t fit into the binder page, forcing you to waste a second one.) If I were planning on taking my film camera, I’d probably bring 20 or 25 rolls, so I think two 4GB cards will probably do the trick.

Of course, I’ll probably take many more photos with a digital than I would with film, so it makes sense to budget more. It’s an open question in my mind whether I’ll really end up with more ‘keepers’ after the first cut than I would with film; in other words, do all the additional shots I take when I’m shooting digital actually amount to more good images, or do they just decrease the S/N ratio? One of my goals is to try and figure that out.

0 Comments, 0 Trackbacks

[/photography] permalink

Sat, 16 May 2009

I got a letter from BB&T last week, an actual paper letter, on account of owning a couple of shares of their stock. I had already heard the news that they planned to cut their dividend and issue between $1.5 and $1.7 billion in new stock to get out of TARP, but it struck me as interesting that they bothered to send out notices, in the form of a letter from the CEO, to all shareholders of record.

The letter is available online here (PDF).

Most news coverage of BB&T’s decision to repay TARP has focused on the dividend reduction, and a remark by CEO Kelly King that it “marks the worst day in my 37 year career.” However, I thought the second page of the letter was really the most interesting:

Many of you have asked why we agreed to participate in the Capital Purchase Program last November. Frankly, we did not need or want the investment, but our regulators urged us along with other healthy banks to participate for the purpose of increasing lending to improve economic conditions.

Them’s fighting words right there, or at least they are by the admittedly low standards of a corporate shareholder communique. For a few months now, rumors have been circulating that healthy banks — like BB&T — were essentially forced or otherwise pressured by regulators to participate in TARP, in order to make it seem less like the plague ward than it really was. This is the first written statement I’ve seen from senior management at a ‘healthy’ bank basically confirming the worst of those rumors.

The key word is “want”: executives at BB&T didn’t want the TARP money from the beginning, which indicates they must have been pressured or coerced — given ‘an offer they could not refuse,’ perhaps — into taking it anyway. The letter doesn’t get into exactly what form that coercion took, but I suspect in time more details, beyond what are already known, will come out. Doubtless it won’t look all good for the banks when it does; in the end all the majors caved, and when they complain, Treasury will accuse them of hypocrisy: buying into the plan when the going was tough, but getting buyers’ remorse now that things are looking somewhat better. This is a legitimate accusation that they’ll have to work hard to defend against. The ultimate question will be what the consequences of not participating — essentially calling the Don’s bluff — would have been, and whether they would have been preferable to what actually occurred.

The government, it seems, is going to turn a fair profit on TARP at great expense to the investors in healthy banks. (The sick banks won’t really have lost money to TARP, at least not in the same way that banks like BB&T did, because they actually needed the capital infusion to stay alive; for them it was money well spent.) I think it’s too much to expect at this point that anything will happen to recoup any of BB&T’s TARP-related losses, either the direct ones in the form of interest payments to the government, or indirect ones like the reduced dividend (which arguably they might have to have done anyway, but perhaps not — now we’ll never know) and share dilution.

There’s not much of a silver lining, but hopefully it will prove to be a ‘learning experience,’ albeit an expensive one. After having been so painfully screwed, it’s doubtful that BB&T or any of the other ‘healthy’ banks will want anything to do with programs similar to TARP in the future; whatever coercion was required to buy their participation this time around, next time they will almost certainly be tougher sells.

TARP may not have injected needed capital into the healthy banks, but it may have given them something far more important in the long run: backbone.

0 Comments, 0 Trackbacks

[/finance] permalink

Sun, 10 May 2009

When I was in perhaps 5th or 6th grade, I recall my math teacher making a halfhearted effort to get us to play The Stock Market Game as part of our curriculum. It didn’t go very well and certainly wasn’t effective as a teaching tool; I remember a couple of class periods spent pretending to make sense of the NY Times’ stock pages while actually reading the comics, and very little else.

A few years ago when I first decided to play my own little stock market game with some Excel spreadsheets, and found myself learning a whole lot more about economics and the market than I had ever really intended, I wondered why I’d never done it before. And then the memory of that abortive attempt to do exactly that came back.

It strikes me as a rather sadly missed opportunity. Until I started playing with my little virtual portfolio in Excel, I had only a very vague idea of how the equities markets worked — and this is despite having taken the two semesters of required Economics in college. Would I have picked a different major or career path as a result of getting that little bit of fundamental understanding that you gain from playing with a paper portfolio earlier? (And would that have been a good thing?) I have no idea. And really, that question doesn’t interest me that much; I have no regrets, certainly, about my actual choices with regards to education or employment.

What does interest me is trying to figure out why, despite someone’s intentions that we would use the Stock Market Game in our math class, it never ended up amounting to anything.

The first problem, I suspect, is that my poor old math teacher — who had been teaching from the same curriculum for probably 30 years — didn’t know that much more about the stock market than we the students did. (I also suspect that she wasn’t the one to decide to include it in class; it just doesn’t, and didn’t at the time, seem like her style.) That combination was deadly, right off the bat. Whatever educational utility a virtual portfolio game might have — and people are rightly skeptical of them at times — it evaporates instantly when the teacher isn’t knowledgeable and interested in it themselves.

Perhaps the root of this problem was making it part of a math class in the first place. There really isn’t that much ‘math’ involved in maintaining a paper portfolio, and what there is represents pretty basic stuff — if you’re using a stock market game to teach percents, chances are you’re not really getting into what makes the stock market interesting and important; you might as well just stick to lemonade stand examples and save students the confusion.

The stock market game would probably have fit better into the history or social-studies curriculum than into math. That might have also caused the focus of the overall lesson to be more about the equities markets themselves — how they work, why they exist, what the effects are of market fluctuations — rather than simply on generating a short-term return in a virtual portfolio. (That would also go a long way towards addressing most of the criticisms of stock market games as propaganda tools expressed in the article by Maier, which are in my opinion mostly quite valid.)

The second major problem had to do with how the game itself was executed; this being the pre-Internet era, we did everything on paper and got pricing information out of the newspaper. Hopefully this wouldn’t be a problem today; it would be simple to use Google Finance, or even the Excel sheets I played with a few years ago, to do it now, and you’d at least get pretty graphs out of the bargain.

The only benefit I can claim from having once had to use, or at least look at, the printed financial pages is a vast appreciation for the electronic tools that are available today to the individual investor, or even to the merely curious. From the looks of the photos on TSMG’s website, they have changed with the times. (Damn kids will just take them for granted. Get off my lawn.)

Aside from simply replacing graph paper and the Times financial section with some pretty electronic system, bringing in computers also allows for a lot more research than would have previously been possible in a classroom setting. Research and due diligence are a huge component of investing and non-technical speculation, and that’s a lesson that you don’t need to become a stock broker or day trader later on in life in order to appreciate — anyone with a 401k will do.

I think the concept of a stock market game is a pretty sound one, in terms of teaching students about a fundamental and important part of our economy — one which can, as recent events have made plain, affect their lives whether they pay attention to it or not. I can only hope that how that concept is being executed today is better than the pathetic attempt I experienced many years ago.

0 Comments, 0 Trackbacks

[/finance] permalink

Fri, 08 May 2009

Earlier today I got an instant message from a friend I haven’t talked to since 2003. Although normally I’d be pleased to hear from an old friend, the fact that the message contained nothing but a link to a web site in the .ru TLD made me suspicious.

Out of curiosity, I grabbed a copy of the page using curl, and then examined it using a text editor. This is the safest way I know of to investigate potentially-hostile web pages; even if the page exploits a flaw in your browser, chances are it’s not designed to exploit a bug in emacs or vi when it’s just being read locally. To no surprise at all, the page was nothing but a bit of JavaScript. (Which is a good reason to browse with something like NoScript enabled.)

Since I’ve just recently started to play around with JS, I thought it would be interesting to take the program apart and see what it does. For safety reasons, and because I don’t want to give the malware authors any additional traffic, I’m not going to link to the original Russian site or actually host their index page; but in the interest of science, I’ve put it up on Pastebin for anyone who wishes to poke at it. Just be careful and don’t run the thing outside of a sandbox.

Pastebin link to the page’s raw HTML.

They’ve done some (fairly trivial) obfuscation to hide the actual code by way of the two script elements on the page. The first <SCRIPT> defines a Decode function and includes the actual payload in a long string; the second <SCRIPT> calls the decoding function.

Their decoder:

function Decode()
{
    var temp = "", i, c = 0, out = "";
    var str = "60!33!68!79!67!84!89!...blahblahblah";
    l = str.length; 
    while (c <= str.length - 1) {
       while (str.charAt(c) != '!') {
          temp = temp + str.charAt(c++);
       }
       c++;
       out = out + String.fromCharCode(temp);
       temp = "";
   }
   document.write(out);
}
Decode();

Obviously I’ve truncated the value of str here for brevity; it’s several thousand bytes long. What we’re looking for — the actual, presumably-malicious code — is inside that string. There are a number of ways we could get at the contents, but since the malware authors have so helpfully supplied us with a decoder, why not use it? Of course, we don’t want to run it from within a browser, or using any of the online JS shells (which might — stupidly — run the code that’s being obfuscated), but the js CLI shell is a pretty safe option.

If we weren’t absolutely sure what the code was going to do when we ran it, we might want to take additional precautions, like running it inside a walled-off VM, but in this case the code to be executed is trivial.

In order to make Decode() run inside the js CLI shell instead of inside a browser’s JS environment, one small change is necessary: where the code above has document.write(out), we need to change this to a simple print(out). This writes the results to standard output when we run the decoder via js -f badscript.js > badscript.out or something similar.
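
For reference, the whole modified decoder looks like this and runs as-is under the js shell. I’ve swapped in a short stand-in payload (it decodes to “Hi!”) in place of the real multi-kilobyte string:

function Decode()
{
    var temp = "", c = 0, out = "";
    // Stand-in payload; the real one is several KB of '!'-separated character codes.
    var str = "72!105!33!";
    while (c <= str.length - 1) {
        while (str.charAt(c) != '!') {
            temp = temp + str.charAt(c++);
        }
        c++;
        // fromCharCode() coerces the accumulated digit string to a number.
        out = out + String.fromCharCode(temp);
        temp = "";
    }
    print(out);   // was document.write(out) in the original
}
Decode();

With the real payload string dropped back in, running it dumps the hidden page to standard output.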

What we’re left with after running this is the page that the hapless victim actually arrives at, but which the malware author attempted to hide inside the script.

I haven’t had a chance to step through the resulting page completely yet, but it seems like a mess of advertisements combined with scripts designed to make it impossible to close the page. I assume there’s probably more nastiness buried in it besides the obvious, however: since the link was sent to me automatically, it’s a good bet it has a way of propagating itself.

0 Comments, 0 Trackbacks

[/technology/web] permalink

Tue, 05 May 2009

Finem Respice is one of my newest favorite blogs, and I’m almost embarrassed not to have found it sooner. Written by an insider in the private equity world, it’s an interesting take on finance and politics that’s different from the investor’s view one gets from outlets like Seeking Alpha or Calculated Risk, or even the economist’s perspective embodied in Tyler Cowen’s Marginal Revolution. Perhaps because it’s written by someone engaged in a profession — that of PE financier — that has lately been made a scapegoat by politicians looking to distract the angry mob from their own incompetence, it doesn’t pull punches where they are deserved.

With that by way of general introduction, the article that caught my eye on FR was “Human Sacrifice At The Altar Of The Cult Of Buoyancy,” which suggests that not only is the ‘bubble mentality’ that led to the overheated technology sector and real estate market not dead, it’s driving the current “recovery” plans by the Obama administration.

Everything in the present climate and the events of the last 30 years is suggestive of bubble worship, unshakable and dangerously, even dogmatically religious in its practice, and consequences. The only aspect of the Cult of Buoyancy in opposition to its title is the lack of fringe status. The Cult of Buoyancy is mainstream.

The ‘Cult of Buoyancy’ is, in short, the belief — bordering on religious faith — that the economy ought, perhaps must, return steady gains year after year, and that if anything interrupts this giant money-printing machine, something is gravely in need of ‘correction.’ Viewed through this lens, I can imagine a believer thinking that market corrections such as the popping of the tech bubble and the current RE slide are not only undesirable, they are unnatural, and must therefore be the fault of some shadow conspiracy. (Whether they actually believe that or not is immaterial; the important point is that there are clearly people in influential positions who act as though they believe it.)

The administration is in the thrall not of the High Priests of Capitalism, but of the Cult of Buoyancy. In retrospect it is quite obvious. The administration wants banks for one purpose. To pump out more loans. Period. This perspective makes almost all of the administration’s actions perfectly logical.

This doesn’t strike me as a particularly controversial assertion. The Obama administration has come very close to saying it outright at various times; their goal is to pour enough cheap money into the banks to “restart” lending (as if it ever stopped; it just stopped being quite so dirt cheap), get the economy back on an upward trajectory via consumer spending and mortgage lending, and — although they generally don’t say this part out loud — leave the underlying problems for someone else to fix at a later date. They take the credit, maybe get a second term out of it, during which they administer another dose of painkiller and soothing words, and are long gone from DC when the patient realizes they’ve been getting morphine for the cancer that’s been steadily worsening all along.

Someone is going to have to stand up and point out to the investing public that there is no quick fix. Someone is going to have to work to start deprogramming the United States after three decades of indoctrination. So long as the Cult of Buoyancy holds such sway, we will never see rational measures to put the economy back on track. We will see the same, tired and now clearly very dangerous tools at work. Inflation. Centralized interest rate planning. Underwriting standards tinkering. Rampant consumerism. Class warfare.

Amen. Of course, I don’t have any real hope that such a ‘someone’ will ever come from Washington; our political system just isn’t designed to allow for it. The public will get milquetoast populists bearing empty platitudes until the flaws in our economy are too obvious to ignore; a point that we are probably at least another one, if not two, boom/bust cycles away from.

The ‘Cult of Buoyancy’ is but a small splinter sect of the big-tent Church of Growth, and that church includes as its adherents virtually everyone who matters in politics, and a fair share of both the Left and Right intelligentsia. Even more dangerous than the belief in ‘buoyancy’ with regard to equities and commodities is the belief that the growth experienced by the United States in the 20th century can continue unabated into the next. Any policy founded on this belief, on an assumption of basically never-ending growth, is doomed to failure — possibly spectacular failure, if the policy involves critical social functions like healthcare or retirement.

The current administration’s embrace of cheap-money policies as “solutions” to what they perceive as an economic malfunction is interesting in itself, but the immediate effects of such a policy pale in comparison to its importance as a telltale of an underlying growth uber alles philosophy that makes its non-economic domestic agenda far more dangerous than it might otherwise be.

0 Comments, 0 Trackbacks

[/politics] permalink

Mon, 09 Feb 2009

This is fairly neat:

Continuous Audio Life Logs and the Personal Audio Project

This is similar to the photo-based project done by Gordon Bell, well-covered in the press, except with continuous audio recordings instead of still photos. That makes it a lot more practical, and somewhat less intrusive/creepy.

I haven’t delved too far into the papers, but judging just from the graphic on the linked page above, it looks like one of the things they’re doing is cross-referencing the audio logs against PIM scheduling information (calendar entries, essentially). That strikes me as a fairly simple way to make the recordings instantly a whole lot more useful.

Just a neat example of how linking two pieces of information, both of limited usefulness on their own, can create a much more useful resource; one that’s greater than the literal sum of its parts.
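To make that a little more concrete, here is a toy sketch of what that sort of cross-referencing might look like. It is purely my own illustration, with invented data, filenames, and timestamps; I have no idea how the Personal Audio Project actually implements it.

# A toy example of linking timestamped audio segments to calendar entries.
# Everything here (filenames, times, events) is made up for illustration.
from datetime import datetime

# (start, end, audio file) -- hypothetical recorder output
audio_segments = [
    (datetime(2009, 2, 9, 10, 0), datetime(2009, 2, 9, 11, 5), "seg_001.wav"),
    (datetime(2009, 2, 9, 14, 30), datetime(2009, 2, 9, 15, 0), "seg_002.wav"),
    (datetime(2009, 2, 9, 16, 0), datetime(2009, 2, 9, 16, 45), "seg_003.wav"),
]

# (start, end, description) -- hypothetical PIM/calendar data
calendar_events = [
    (datetime(2009, 2, 9, 10, 0), datetime(2009, 2, 9, 11, 0), "Staff meeting"),
    (datetime(2009, 2, 9, 14, 30), datetime(2009, 2, 9, 15, 0), "Call with vendor"),
]

def label_segments(segments, events):
    # Attach to each audio segment the descriptions of any calendar events
    # that overlap it in time.
    labeled = []
    for seg_start, seg_end, filename in segments:
        matches = [desc for ev_start, ev_end, desc in events
                   if seg_start < ev_end and ev_start < seg_end]
        labeled.append((filename, matches))
    return labeled

for filename, matches in label_segments(audio_segments, calendar_events):
    print("%s: %s" % (filename, ", ".join(matches) or "(nothing scheduled)"))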

0 Comments, 0 Trackbacks

[/technology] permalink

The same problem with the Linksys SPA-2102 failing to register cropped up again today: despite switching from port 5060 to 5061 last week, registration stopped working again.

Switching up to port 5062 solved the problem immediately. I didn’t try switching down to 5060 to see if that was working again, although maybe I should have. I think my VOIP provider listens on all ports between 5060 and 5080, so if they’re not being reopened I’ll have a little while (at one port per week) before I’m in real trouble, but it will eventually become an issue.

I’ve been thinking about ways to test and see whether Comcast really is interfering with outgoing packets on certain ports, but haven’t come up with anything that seems like a really good test. The Java-based SIP test pages, of which there are many versions around, don’t seem to be cutting it. I could run netcat on a remote host somewhere, but unfortunately, most of the computers I have access to are on Comcast’s own network — so it wouldn’t tell me anything if Comcast is performing filtering at their upstream gateway.

hping seems like it might also be of some use, although I haven’t a clue about how to use it.
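In the meantime, here is a rough sketch of the kind of netcat-style round-trip test I have in mind, written in Python so it is self-contained. It assumes I can eventually get shell access to a machine outside Comcast’s network; the hostnames and port numbers are placeholders.

# udp_probe.py -- a minimal sketch for checking whether outbound UDP to a
# given port makes a round trip. Run "python udp_probe.py listen 5060" on a
# host OUTSIDE Comcast's network, then "python udp_probe.py probe that-host
# 5060" from the home connection. Hostnames and ports are placeholders.
import socket
import sys

def listen(port):
    # Echo back anything received, so the prober can tell its packet arrived.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    print("listening on UDP port %d" % port)
    while True:
        data, addr = sock.recvfrom(2048)
        print("got %d bytes from %s:%d" % (len(data), addr[0], addr[1]))
        sock.sendto(data, addr)

def probe(host, port):
    # Send a probe and wait briefly for the echo. No reply suggests the
    # packet (or the reply) was dropped somewhere along the way, though it
    # could also just mean the listener isn't running on that port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    try:
        sock.sendto(b"probe", (host, port))
        sock.recvfrom(2048)
        print("port %d: echo received, path looks clear" % port)
    except socket.timeout:
        print("port %d: no reply; possibly filtered" % port)
    finally:
        sock.close()

if __name__ == "__main__":
    if sys.argv[1] == "listen":
        listen(int(sys.argv[2]))
    else:
        probe(sys.argv[2], int(sys.argv[3]))

Probing 5060 through 5065 plus some unrelated control port from the same connection would at least show whether only the SIP ports are affected, even if it can’t prove who is doing the dropping.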

0 Comments, 0 Trackbacks

[/technology/voip] permalink

Mon, 02 Feb 2009

Earlier today I ran into a problem with my Linksys SPA-2102 VOIP ATA that strongly suggests Comcast may be intermittently blocking outgoing connections on port 5060, the most common port used for SIP signaling.

The problem stems from the Linksys failing to ‘register’ with the VOIP termination provider, meaning that there’s no dialtone, no way to make outgoing calls, and all incoming calls get forwarded to my backup number. In the 2102’s logs (via syslog running on a separate machine), the following messages are repeated over and over, at about 30s intervals:

Feb  1 06:28:14 192.168.1.150 RSE_DEBUG: unref domain, _sip._udp.callcentric.com
Feb  1 06:28:14 192.168.1.150 RSE_DEBUG: last unref for domain _sip._udp.callcentric.com
Feb  1 06:28:44 192.168.1.150 RSE_DEBUG: reference domain:_sip._udp.callcentric.com
Feb  1 06:29:16 192.168.1.150 RSE_DEBUG: getting alternate from domain:_sip._udp.callcentric.com
Feb  1 06:29:16 192.168.1.150 [0]Reg Addr Change(0) cc0bc025:5080->cc0bc017:5080
Feb  1 06:29:16 192.168.1.150 [0]Reg Addr Change(0) cc0bc025:5080->cc0bc017:5080
Feb  1 06:29:48 192.168.1.150 RSE_DEBUG: getting alternate from domain:_sip._udp.callcentric.com
Feb  1 06:29:48 192.168.1.150 [0]RegFail. Retry in 30

On the surface, this looks like it might be a DNS problem: something with the SRV records not resolving right. But that is easy to test with utilities like dig, and everything checked out fine using both my ISP’s DNS servers and publicly-available ones: the SRV records resolved correctly, and my VOIP provider (Callcentric) claimed everything was working okay on their end.
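For anyone who wants to repeat that check, the query is just an SRV lookup on the domain from the syslog output above (e.g. dig SRV _sip._udp.callcentric.com). Here is the same thing as a small Python sketch using the third-party dnspython package; dnspython is purely my choice for illustration, not anything the ATA itself uses.

# srv_check.py -- resolve the SIP SRV record the ATA is trying to use.
# Requires the third-party dnspython package.
import dns.resolver

# Record name taken from the syslog messages above.
answers = dns.resolver.resolve("_sip._udp.callcentric.com", "SRV")
# (Older dnspython releases call this dns.resolver.query instead.)

for rr in answers:
    # Each SRV record gives a target host plus the port, priority, and
    # weight a client should use when choosing among the servers.
    print("%s port %d (priority %d, weight %d)"
          % (rr.target, rr.port, rr.priority, rr.weight))

If that comes back with sensible host and port pairs, as it did for me, DNS probably isn’t the culprit.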

Turning on SIP logging in the 2102 gave me some additional information to work with. This feature (called ‘SIP Debug’ and enabled on the ‘Line’ tab when looking at the advanced configuration menus of the 2102) shows the content of the SIP packets being sent out or received by the ATA. As soon as I turned this on, it became clear that there were packets going out every few seconds to the correct addresses, and those packets had the correct information in them (my WAN address) for replies, but nothing was coming back in from Callcentric.

This, at the very least, is pretty suspicious. Unfortunately, it’s difficult to test and determine exactly where the packets are being dropped. Sending ICMP “echo request” packets, as the ping utility does, tests the path to a particular host, but does not test for port-based filtering along the route.

After several hours of fruitless messing around, and several messages back and forth to Callcentric’s support team (who advised me to update the firmware on the Linksys, to no effect beyond enabling a few options that weren’t there before), I decided to try changing the SIP port. The default — and the one I was using — was port 5060.

As soon as I changed the port from 5060 to 5061, like magic, everything suddenly started working. The Linksys registered, the messages stopped, and I got dialtone.

Nothing within my LAN would have been blocking port 5060, either incoming or outgoing — the Linksys was in the DMZ, the only specifically forwarded ports are in the under-500 range, and there were no other port forwards or UPnP rules — and certainly nothing on my end would have blocked 5060 but not 5061. From all the information I can gather, it looks as though something upstream was stopping the SIP registration packets on port 5060 from getting through, so my ATA was sending them out endlessly without ever getting a response, while traffic on 5061 was left alone. That would explain why everything ‘just worked’ as soon as I made the change.

It may be that there’s a perfectly reasonable explanation for the blocking, but I can’t think of too many, while I can think of a lot of nefarious ones. Comcast has its own VOIP service, and it would make sense for them to impair the reliability of other offerings in order to make their own — which runs over dedicated channels on the cable infrastructure — seem superior. Googling for “comcast port 5060” shows that I’m not the only one who suspects this. It’s clear they’re not doing it everywhere, but from time to time they seem to decide to close off a particular port to a user without any warning. (Giving them the benefit of the doubt: maybe they have some sort of overzealous automatic system that’s to blame?)

The obvious way to resolve the question would be to call Comcast and see if they’ll admit to blocking anything, but given that it’s a Sunday and that dealing with Comcast always feels like it takes years off my life, I’m probably not going to bother unless 5061 stops working, too. Plus, to really implicate Comcast I’d need to eliminate any possibility of my router being the source of the dropped packets, by connecting the VOIP ATA directly to the cable modem. But that would involve too much disruption for me to really consider it, just to sate my curiosity.

However, if anyone out there is running into the “unref domain” registration-failure error, and it’s clear that it’s not a DNS or LAN/gateway issue, I’d suggest changing the SIP port and seeing if that fixes the problem.

0 Comments, 0 Trackbacks

[/technology/voip] permalink

Wed, 21 Jan 2009

I finally got around today to making some minor tweaks to the site; I changed the front page a bit, to make it clear that most content is here in the blog and not anywhere else, and I monkeyed around with the CSS that’s behind the scenes a little, to try and make the site look less ugly on mobile browsers.

Unfortunately it’s still pretty ugly on mobiles, and I should really do a lot more, but that would involve digging into the blog templates that I haven’t looked at in a couple of years, and that just seems a bit too much like work for a hobby project that I’m pretty sure nobody reads anyway. So for now, it is what it is.

Eventually I may stick a “Contact Me” link on the main index page, if I ever get around to putting together a decent webform that won’t get me spammed, won’t let others use it to send spam, and isn’t too onerous for casual drive-bys to use. What I’m thinking of is something that automatically encrypts messages sent via the form, using my PGP public key; that would make the form pretty useless for relaying spam to anyone but me, as well as giving the messages some security in transit (if the encryption were done in JavaScript, on the client side). Maybe sometime later this month or next, if work is slow.

If anyone happens to notice any bugs in the CSS, please let me know by posting a comment below; I have tried to test it on a few browsers (and it’s pretty dead simple to boot), but I’ve heard IE has some strange bugs in its CSS rendering that can make even simple layouts barf.

0 Comments, 0 Trackbacks

[/meta] permalink

Tue, 20 Jan 2009

I just finished reading an email from one of the sysops here on the SDF (the system this website is hosted on, among many other more important things) announcing that the SDF’s original hardware has found a new home at the Museum of Communications in Seattle, Washington.

I’d never heard of the museum before, but it seems like a very cool place and it’s definitely on my list for the next time I’m on that coast with a Tuesday (it’s only open on Tuesdays) to spare. Aside from the SDF hardware — an AT&T 3B2 — they also seem to have a huge variety of telephone and telecommunications-related equipment, spanning decades.

Their collection of central office equipment is perhaps the most impressive part, especially because the switches are — at least according to the web site — all operational. (Since the organization behind the museum is composed in large part of ex-telecom workers, this is not as hard to believe as it might otherwise be.) Check out all the Strowger switches! And as a bonus, the museum is located entirely inside a working switching center — probably one of the only times you’ll ever set foot in one. (The modern digital switches are not on display.)

Unfortunately, I probably won’t be making it out to Seattle any time soon, but I’m glad to see this sort of history being preserved so competently.

0 Comments, 0 Trackbacks

[/technology] permalink