Kadin2048's Weblog

Mon, 14 Jul 2014

An apparently common issue with Outlook for Mac 2011 is crazily high CPU usage, enough to spin up the fans on a desktop machine or drain the battery on a laptop, when Outlook really shouldn’t be doing anything.

If you do some Googling, you’ll find a lot of people complaining and almost as many recommended solutions. Updating to a version after 14.2 is a typical suggestion, as is deleting and rebuilding your mail accounts (ugh, no thanks).

Keeping Outlook up to date isn’t a bad idea, but the problem still persisted with the latest version as of today (14.4.3).

In my case, the high CPU usage had something to do with my Gmail IMAP account, which is accessed from Outlook alongside my Exchange mailbox. Disabling the Gmail account stopped the stupid CPU usage, but that’s not really a solution.

What did work was using the Progress window to see what Outlook was up to whenever the CPU pegged. As it turned out, there was a particular IMAP folder — the ‘Starred’ folder, used by both Gmail and Outlook for starred and flagged messages, respectively — which was being constantly refreshed by Outlook. It would upload all the messages in the folder to Gmail, then quiesce for a second, then do it over again. Over and over.

Outlook’s IMAP implementation is just generally bad, and this seems to happen occasionally without warning. But the Outlook engineers seem to have anticipated it, because if you right-click on an IMAP folder, there’s a helpful option called “Repair Folder”. If you use it on the offending folder, it will replace the contents of the local IMAP store with the server’s version, and break the infinite-refresh cycle.

So, long story short: if you have high-CPU issues with Outlook for Mac, try the following:

  1. Update Outlook using the built-in update functionality. See if that fixes the issue.
  2. Use the Progress window to see what Outlook is doing at times when the CPU usage is high. Is it refreshing an IMAP folder?
  3. If so, use the Repair Folder option on that IMAP folder, but be aware that any local changes you’ve made will be lost.

And, of course, lobby your friendly local IT department to use something that sucks less than Exchange.

[/technology] permalink

Sun, 08 Sep 2013

After reading through some — certainly not all, and admittedly not thoroughly — of the documents and analysis of the NSA “BULLRUN” crypto-subversion program, as well as various panicky speculation on the usual discussion sites, I can’t resist the temptation to make a few predictions/guesses. At some point in the future I’ll revisit them and we’ll all get to see whether things are actually better or worse than I suspect they are.

I’m not claiming any special knowledge or expertise here; I’m just a dog on the Internet.

Hypothesis 1: NSA hasn’t made any fundamental breakthroughs in cryptanalysis, such as a method of rapidly factoring large numbers, which render public-key cryptography suddenly useless.

None of the leaks seem to suggest any heretofore-unknown abilities that undermine the mathematics that lie at the heart of PK crypto (trapdoor functions). E.g. a giant quantum computer that can simply brute-force arbitrarily large keys in short amounts of time. In fact, the leaks suggest that this capability almost certainly doesn’t exist, or else all the other messy stuff, like compromising various implementations, wouldn’t be necessary.

Hypothesis 2: There are a variety of strategies used by NSA/GCHQ for getting access to encrypted communications, rather than a single technique.

This is a pretty trivial observation. There’s no single “BULLRUN” vulnerability; instead there was an entire program aimed at compromising various products to make them easier to break, and the way this was done varied from case to case. I point this out only because I suspect that it may get glossed over in public discussions of the issue in the future, particularly if there are especially egregious vulnerabilities that were inserted (as seems likely).

Hypothesis 3: Certificate authorities are probably compromised (duh)

This is conjecture on my part, and not drawn directly from any primary source material. But the widely-accepted certificate authorities that form the heart of SSL/TLS PKI are simply too big a target for anyone wanting to monitor communications to ignore. If you have root certs and access to backbone switches with suitably fast equipment, there’s no technical reason why you can’t MITM TLS connections all day long.

However, MITM attacks are still active rather than passive, and probably unfeasible even for the NSA or its contemporaries on a universal basis. Since they’re detectable by a careful-enough user (e.g. someone who actually verifies a certificate fingerprint over a side channel), it’s likely the sort of capability that you keep in reserve for when it counts.
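
As an aside, that kind of side-channel check doesn’t require anything exotic. Here’s a minimal Python sketch of the idea (the host name is just a placeholder; you’d compare the printed digest against one obtained over some separate, trusted channel):

    import hashlib
    import ssl

    # Placeholder target; substitute the server you actually care about.
    HOST, PORT = "example.com", 443

    # Fetch whatever certificate the server (or a man-in-the-middle) presents.
    pem = ssl.get_server_certificate((HOST, PORT))
    der = ssl.PEM_cert_to_DER_cert(pem)

    # A MITM using a forged-but-CA-signed certificate still produces a
    # different fingerprint, so an out-of-band comparison catches it.
    print(hashlib.sha256(der).hexdigest())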

This really shouldn’t be surprising; if anyone seriously thought, pre-Snowden, that Verisign et al wouldn’t and hadn’t handed over the secret keys to their root certs to the NSA, I’d say they were pretty naive.

Hypothesis 4: Offline attacks are facilitated in large part by weak PRNGs

Some documents allude to a program of recording large amounts of encrypted Internet traffic for later decryption and analysis. This rules out conventional MITM attacks, and implies some other method of breaking commonly-used Internet cryptography.

At least one NSA-related weakness seems to have been the Dual_EC_DRBG pseudorandom number generator specified in NIST SP 800-90; it was a bit ham-handed as these things go, since it was eventually discovered, but it’s important because it shows an interest in planting exactly this sort of weakness.

It is possible that certain “improvements” were made to hardware RNGs, such as those used in VPN hardware and also in many PCs, but the jury seems to be out right now. Compromising hardware makes somewhat more sense than software, though, since it’s much harder to audit and detect, and it’s also harder to update, so a planted flaw stays useful longer.

Engineered weaknesses inside [P]RNG hardware used in VPN appliances and other enterprise gear might be the core of NSA’s offline intercept capability, the crown jewel of the whole thing. However, it’s important to keep in mind Hypothesis 2, above.
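
To make the weak-PRNG point concrete, here’s a toy sketch (not any real product’s key-generation code) showing why a generator seeded from a small or predictable pool is fatal for traffic that’s been recorded for offline analysis: the attacker just enumerates seeds at leisure.

    import hashlib
    import random

    def weak_session_key(seed: int) -> bytes:
        # Toy key derivation: a PRNG seeded with only 18 bits of entropy.
        rng = random.Random(seed)
        return bytes(rng.getrandbits(8) for _ in range(16))

    # The "victim" derives a session key from a weak seed...
    victim_key = weak_session_key(seed=91736)

    # ...and the attacker, working offline, tries every possible seed. Here we
    # compare hashes of the key; against real traffic you would test-decrypt.
    target = hashlib.sha256(victim_key).digest()
    for guess in range(2 ** 18):
        if hashlib.sha256(weak_session_key(guess)).digest() == target:
            print("recovered seed:", guess)
            break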

Hypothesis 5: GCC and other compilers are probably not compromised

It’s possible, both in theory and to some degree in practice, to compromise software by building flaws into the compiler that’s used to create it. (The seminal paper on this topic is “Reflections on Trusting Trust” by Ken Thompson. It’s worth reading.)

Some only-slightly-paranoids have suggested that the NSA and its sister organizations may have attempted to subvert commonly-used compilers in order to weaken all cryptographic software produced with them. I think this is pretty unlikely to have actually been carried out; it just seems like the risk of discovery would be too high. Despite the complexity of something like GCC, there are lots of people looking at it from a variety of organizations; it would be difficult to subvert all of them, and harder still to insert an exploit that would go completely undetected. In comparison, it would be relatively easy to convince a single company producing ASICs to modify a proprietary design. Just based on bang-for-buck, I think that’s where the effort is likely to have been.

Hypothesis 6: The situation is probably not hopeless, from a security perspective.

There is a refrain in some circles that the situation is now hopeless, and that PK-cryptography-based information security is irretrievably broken and can never be trusted ever again. I do not think that this is the case.

My guess — and this is really a guess; it’s the thing that I’m hoping will be true — is that there’s nothing fundamentally wrong with public key crypto, or even with many carefully-built implementations. It’s when you start optimizing for cost or speed that you open the door.

So: if you are very, very careful, you will still be able to build up a reasonably-secure infrastructure using currently available hardware and software. (‘Reasonably secure’ meaning resistant to untargeted mass surveillance, not necessarily to a targeted attack that might include physical bugging: that’s a much higher bar.) However, some code may need to be changed in order to eliminate any reliance on possibly-compromised components, such as hardware RNGs / accelerators that by their nature are difficult to audit.

Large companies that have significant investments in VPN or TLS-acceleration hardware are probably screwed. Even if the gear is demonstrably flawed, look for most companies to downplay the risk in order to avoid having to suddenly replace it.

Time will tell exactly which techniques are still safe and which aren’t, but my WAG (just for the record, so that there’s something to give a thumbs-up / thumbs-down on later) is that TLS in a FIPS-compliant mode, on commodity PC hardware but with hardware RNGs disabled or absent at both ends of the connection, using throwaway certificates (i.e. no use of conventional PKI like certificate authorities) validated via a side channel, will turn out to be fairly good. But a lot of work will have to be invested in validating everything to be sure.

Also, my overall guess is that neither the open-source world nor the commercial, closed-source world will come out entirely unscathed, in terms of reputation for quality. However, the worst vulnerabilities are likely to have been inserted where there were the fewest eyes looking for them, which will probably be in hardware or in tightly integrated firmware/software developed by single companies and distributed only in “compiled” (literally compiled, or in the form of an ASIC) form.

As usual, simpler will turn out to be better, and generic hardware running widely-available software will be better than dedicated boxes filled with a lot of proprietary silicon or code.

So we’ll see how close I am to the mark. Interesting times.

[/technology] permalink

Fri, 22 Feb 2013

I’ve recently (re)taken up cycling in a fairly major way, and have been surprised by how much I’ve enjoyed it. One of the things that’s making it more fun this time around, as compared to previous dabblings in years past, is the various ways that you can measure and quantify your progress — not to mention your suffering — and compare it with others, etc.

For example, a recent ride taken with a few friends:

Time: 01:54:50
Avg Speed: 13.5 mi/h
Distance: 25.8 mi
Energy Output: 826 kJ
Average Power: 120 W

Now, 120 W is really not especially great from a competitive cycling perspective; better riders routinely output 500-ish watts. But it struck me as being pretty efficient: for all my effort, the ride actually only required the same amount of power to propel me on my way as would have been required by two household light bulbs.

So that got me thinking: just how efficient is cycling?

My 25.8 mi / 41.5 km roundtrip ride required 826 kJ, if we believe Strava; that’s mechanical energy at the pedals. (I unfortunately don’t have a power meter on my bike, so this is a bit of an estimate on Strava’s part, taking into account my weight, my bike’s weight, my speed, the elevation changes on the route, etc.)

That’s about the same as the energy released by combusting roughly 18 grams of gasoline (taking gasoline at about 46 MJ/kg). If I ran directly on gasoline, the 24 fl oz water bottle on my bike would hold around 525 grams of the stuff, enough for something like 750 miles of riding at that rate.

Of course, cars aren’t perfectly efficient in their use of gasoline, and I’m not a perfectly efficient user of food calories. Strava helpfully estimates the food-calorie expenditure of my ride at 921 Calories, which is 3.85 MJ, leading to a somewhat disappointing figure of only 21.4% overall efficiency. (Disappointing only in the engineering sense; from an exercise perspective I’d really rather it be low.)

Though it’s about on par with a car, interestingly enough. The Feds give anywhere between 14-26% as a typical ‘tank-to-tread’ efficiency figure for a passenger car, with most losses in the engine itself.

So if I were able to drink gasoline and use it at least as efficiently as a car, my water bottle would get me a bit over 100 miles (using the low-end 14% efficiency figure). Still pretty good, considering that my own car would only get about 5 miles on the same amount of fuel (24 fl oz at 25 MPG).
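
For anyone who wants to check the arithmetic, here’s the whole back-of-the-envelope in one place. The ride numbers (826 kJ, 25.8 mi, 921 Calories, a 24 fl oz bottle, the 14% car figure) come from above; the energy density and density of gasoline are my assumed round numbers:

    # Figures from the ride and from the text above.
    ride_mech_mj = 826 / 1000.0        # mechanical energy at the pedals, MJ
    ride_miles   = 25.8
    food_kcal    = 921                 # Strava's estimate of food energy burned
    car_eff      = 0.14                # low-end tank-to-tread efficiency

    # Assumed properties of gasoline and the bottle.
    gasoline_mj_per_kg = 46.0
    gasoline_kg_per_l  = 0.74
    bottle_liters      = 0.710         # 24 fl oz

    equiv_g  = ride_mech_mj / gasoline_mj_per_kg * 1000          # ~18 g per ride
    bottle_g = bottle_liters * gasoline_kg_per_l * 1000          # ~525 g per bottle

    pedal_range = bottle_g / equiv_g * ride_miles                # ~750 mi at 100% conversion
    rider_eff   = ride_mech_mj / (food_kcal * 4.184 / 1000)      # ~21% food-to-pedal
    car_range   = (bottle_g / 1000 * gasoline_mj_per_kg * car_eff
                   / ride_mech_mj * ride_miles)                  # ~105 mi at car-like efficiency

    print(f"{equiv_g:.1f} g of gasoline-equivalent per ride")
    print(f"{pedal_range:.0f} mi per bottle at 100% conversion")
    print(f"rider efficiency: {rider_eff:.1%}")
    print(f"{car_range:.0f} mi per bottle at 14% efficiency")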

Of course, a car isn’t an especially fair comparison — it has a lot of overhead in terms of mass, rolling resistance (more tires, at lower pressure), and air resistance (higher cross-sectional area). Some sort of small motorbike would be a better comparison, and there I suspect you’d start to see a more even playing field.

Maybe that’s my argument for getting a motorcycle…

[/technology] permalink

Wed, 26 Sep 2012

I recently had a hardware failure, and decided to take the opportunity to upgrade my aging home server from Ubuntu ‘Dapper Drake’ to Scientific Linux. The reasons for my move away from Ubuntu are an article unto themselves, but it boils down to what I see as an increasing contempt for existing users (and pointless pursuit of hypothetical tablet users — everybody wants to try their hand at being Apple these days, apparently unaware that the role has been filled), combined with — and this is significantly more important — the fact that I have been using RPM-based distros far more often at work than Debian/APT-based ones, despite the many advantages of the latter. Anyway, so I decided to switch the server to SL.

The actual migration process wasn’t pretty and involved a close call with a failing hard drive which I won’t bore you with. The basic process was to preserve the /home partition while tossing everything else. This wasn’t too hard, since SL uses the same Anaconda installer as Fedora and many other distros. I just told it to use my root partition as /, my home partition as /home, etc.

And then I rebooted into my new machine. And seemingly everything broke.

The first hint was on login: I got a helpful message informing me that my home directory (e.g. /home/myusername) didn’t exist. Which was interesting, because once logged in I could easily cd to that directory, which plainly did exist on the filesystem.

The next issue was with ssh: although I could connect via ssh as my normal user, it wasn’t possible to use public key auth, based on the authorized_keys file in my home directory. It was as though the ssh process wasn’t able to access my home directory…

As it turned out, the culprit was SELinux. Because the “source” operating system that I was migrating from didn’t have SELinux enabled, and the “destination” one did, there weren’t proper ‘security contexts’ (extended attributes) on the files stored on /home.

The solution was pretty trivial: I had to run # restorecon -R -v /home (note as root!), which took a few minutes, and then everything worked as expected. This was something I only discovered after much searching, on this forum posting regarding a Fedora 12 install. I’m noting it here in part so that perhaps other people in the future can find it more easily. And because, unfortunately, there are forums filled with people experiencing the same problem and receiving terrible advice that they need to reformat /home (in effect, throw away all their data) in order to upgrade or change distros.
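
If you’re curious what’s actually going on before (or after) running restorecon, the context is just an extended attribute on each file. A quick way to peek at it from Python (Linux-only; the path is just an example) looks like this:

    import os

    # Example path; point it at a file under your own home directory.
    path = "/home/myusername/.ssh/authorized_keys"

    try:
        # SELinux stores the label in the 'security.selinux' extended attribute.
        label = os.getxattr(path, "security.selinux")
        print(path, "->", label.decode().rstrip("\x00"))
    except OSError as exc:
        # A missing label (or a filesystem without SELinux support) lands here.
        print(path, "has no SELinux context:", exc)

Files carried over from the non-SELinux install show up without a sensible label, which is exactly what restorecon fixes.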

Bottom line: if you are running into weird issues on login (console or SSH) after an upgrade from a non-SELinux distro to a SELinux-enabled one, try rebuilding the security context before taking any drastic steps.

[/technology] permalink

Wed, 01 Aug 2012

Lockheed is apparently working on a next-generation carrier-based unmanned fighter aircraft, the “Sea Ghost.” At least, they are “working” on it in the sense that they paid some graphic designer to make some CGI glamour shots of something that might be a UAV, sitting on the deck of what is presumably an aircraft carrier. As press releases go it’s a little underwhelming, but whatever.

From the rendering, it appears that the Sea Ghost is a flying wing design, which is interesting for a number of reasons. Flying wings are almost as old as aviation in general, but have proved — with a few notable exceptions — to be largely impractical, despite having some nice advantages on paper over the traditional fuselage-plus-wings monoplane design. It’s one of those ideas that’s just so good that, despite a sobering list of failures, it just won’t die.

One of the big problems with flying wings is yaw control. Since they lack a tail and traditional rudder, getting the aircraft to rotate on the horizontal plane is difficult. Typically — in the case of the B2, anyway — this is accomplished by careful manipulation of the ailerons to create drag on one wing, while simultaneously compensating on the other side in order to control roll. This is, to put it mildly, a neat trick, and it’s probably the only reason why the B2 exists as a production aircraft (albeit a really expensive one).

I suspect that the Sea Ghost is built the same way, if only because it’s been proven to work and the Lockheed rendering doesn’t show any other vertical stabilizer surfaces that would do the job.

But a thought occurred to me: if you can make a drone small and light enough (more precisely, with a small enough moment of inertia), you don’t need to do the B2 aileron trick at all. You could maneuver it like a satellite. That is, by using a gyroscope not simply to sense the aircraft’s change in attitude, but actually to torque the airframe around. Simply: you spin up the gyro, and then use servos to tilt the gimbal that the gyro sits in. The result is a torque on the airframe opposite the direction in which the gyro’s axis is moved. With multiple gyros, you could potentially have roll, pitch, and yaw control.

This isn’t practical for most aircraft — aside from helicopters which do it naturally to a degree — because they have too much inertia, and the external forces acting against them are too large; the gyroscope you’d need to provide any sort of useful maneuvering ability would either make the plane too heavy to fly, or take up space needed for other things (e.g. bombs, in the case of most flying wing aircraft). And that might still be the case with the Sea Ghost, but it’s not necessarily the case with all drones.

The smaller and, more importantly, lighter the aircraft, the easier it would be to maneuver with a gyroscope rather than external aerodynamic control surfaces. Once you remove the requirement to carry a person, aircraft can be quite small.
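
A rough back-of-the-envelope supports that. The torque you get from precessing a spinning rotor is the rotor’s angular momentum times the gimbal rate; every number below is an assumption I made up purely to get the orders of magnitude:

    import math

    # Assumed control rotor: a 200 g disc, 4 cm radius, spun at 20,000 rpm,
    # with the gimbal being swept at 2 rad/s by a servo.
    rotor_mass_kg   = 0.20
    rotor_radius_m  = 0.04
    spin_rpm        = 20_000
    gimbal_rate_rps = 2.0

    rotor_inertia = 0.5 * rotor_mass_kg * rotor_radius_m ** 2    # disc: I = m r^2 / 2
    spin_rate     = spin_rpm * 2 * math.pi / 60                  # rad/s
    torque        = rotor_inertia * spin_rate * gimbal_rate_rps  # N*m, perpendicular axes

    # Assumed airframe: ~1.5 kg, ~1 m span, treated as a slender rod in yaw.
    yaw_inertia = 1.5 * 1.0 ** 2 / 12
    yaw_accel   = torque / yaw_inertia

    print(f"gyroscopic torque: {torque:.2f} N*m")
    print(f"yaw acceleration:  {math.degrees(yaw_accel):.0f} deg/s^2")

Something like 0.7 N·m and a few hundred degrees per second squared of yaw authority is plenty for a model-sized airframe; scale the airframe up to bomber size and the same rotor does essentially nothing, which is the point.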

It wouldn’t surprise me if you could maneuver a small hobbyist aircraft with a surplus artificial-horizon gyro. To my knowledge, nobody has done this yet, but it seems like a pretty straightforward merger of existing technology. You’d need a bunch of additional MEMS gyros, which are lightweight, to sense the change in attitude and to stop and start the maneuvering gyro’s movement, but there’s nothing that seems like an obvious deal-breaker.

The advantage of such a system would be that there’s no change to the outside skin of the aircraft in order to make it maneuver (within the limits of the force provided by the gyro). That would mean a lower radar cross section, and potentially less complexity and weight due to fewer moving parts in the wings.

Just one of the many intriguing possibilities you come up with, when you take 80 kilos of human meat out of the list of requirements.

Almost enough to get me back into model airplanes again.

[/technology] permalink

Sun, 08 Apr 2012

For no particularly good reason, I decided I wanted to play around with IBM VM earlier this weekend. Although this would seem on its face to be fairly difficult — VM/370 is a mainframe OS, after all — thanks to the Hercules emulator, you can get it running on either Windows or Linux fairly easily.

Unfortunately, many of the instructions I found online were either geared towards people having trouble compiling Hercules from source (which I avoided thanks to Ubuntu’s binaries), or assume a lot of pre-existing VM/370 knowledge, or are woefully out of date. So here are just a couple of notes should anyone else be interested in playing around with a fake mainframe…

Some notes about my environment:

  • Dual-core AMD Athlon64 2GHz
  • 1 GB RAM (yes, I know, it needs more memory)
  • Ubuntu 10.04 LTS, aka Lucid

Ubuntu Lucid has a good binary version of Hercules in the repositories. So no compilation is required, at least not for any of the basic features that I was initially interested in. A quick apt-get install hercules and apt-get install x3270 were the only necessities.

In general, I followed the instructions at gunkies.org: Installing VM/370 on Hercules. However, there were a few differences. The guide is geared towards someone running Hercules on Windows, not Linux.

  • You do not need to set everything up in the same location as the Hercules binaries, as the guide seems to indicate. I created a vm370 directory in my user home, and it worked fine as a place to set up the various archives and DASD files (virtual machine drives).

  • The guide takes you through sequences where you boot the emulated machine, load a ‘tape’, reboot, then load the other ‘tape’. When I did this, the second load didn’t work (indefinite hang until I quit the virtual system from the Hercules console). But after examining the DASD files, it seemed like the second tape had loaded anyway, but the timestamp indicated that it had loaded at the same time as the first tape. I think that they both loaded one after the other in the first boot cycle — hard to tell for sure at this point, but don’t be too concerned if things don’t seem to work as described; I got a working system anyway. Update: The instructions work as described; I had a badly set-up DASD file that was causing an error, which did not show itself until later when I logged in and tried to start CMS.

  • To get a 3270 connection, I had to connect to 127.0.0.1 on port 3270; trying to connect to “localhost” didn’t work. I assume this is just a result of how Hercules is set up to listen, but it caused me to waste some time. (There’s a quick way to check this, sketched just after this list.)

  • The tutorial tells you to start Hercules, then connect your 3270 emulator to the virtual system, then run the ipl command; the expected result is to see the loader on the 3270. For me, this didn’t work… the 3270 display just hung at the Hercules splash screen. To interact with the loader, I had to disconnect and reconnect the 3270 emulator. So, rather than starting Hercules, connecting the 3270, then ipl-ing, it seems easier to start Hercules, ipl, then connect and operate the loader.
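
As for the 127.0.0.1-versus-localhost issue a couple of bullets up, a quick way to confirm which addresses the console port is actually answering on (3270 being the port used here; adjust if your configuration differs) is something like:

    import socket

    # Try both the loopback address and whatever "localhost" resolves to;
    # if Hercules is only listening on 127.0.0.1, the second one may fail.
    for host in ("127.0.0.1", "localhost"):
        try:
            with socket.create_connection((host, 3270), timeout=2):
                print(f"{host}:3270 accepts connections")
        except OSError as exc:
            print(f"{host}:3270 failed: {exc}")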

Of course, when you get through the whole procedure, what you’ll have is a bare installation of VM/370… without documentation (or extensive previous experience), you can’t do a whole lot. That’s what I’m figuring out now. Perhaps it’ll be the subject of a future entry.

[/technology/software] permalink

Fri, 16 Mar 2012

After switching from my venerable Nexus One to a new Samsung Galaxy SII (SGS2) from T-Mobile, I was intrigued to discover that it has a fairly neat WiFi calling ability. This feature lets the phone use a wireless IP access point to place calls, in lieu of the normal cellular data network. On one hand it’s a bit of a ripoff — even though you’re using your own Internet rather than T-Mobile’s valuable spectrum, they still use up your minutes at the same rate; however, it’s nice if you travel to a place with crummy cell service but decent wireless Internet.

When the feature is enabled, the phone will switch preferentially to WiFi for all calls, once it pairs to an access point. (It can be disabled if you’d prefer it to not do this.) There are still some very rough edges: the biggest issue is that there’s no handoff, so if you place a call over WiFi and then walk out of range of the AP, the call drops. Whoops.

I was curious how the calls were actually handled on the wire, and in particular how secure things were. To this end, I decided to run a quick Wireshark analysis on the phone, while it was connected to my home WiFi AP.

The setup for this is pretty trivial, and out of scope of this entry; basically you just need to find a way to get the packets going to and coming from the phone to be copied to a machine where you can run Wireshark or tcpdump. You can do this with an Ethernet hub (the old-school method), via the router’s configuration, or even via ARP spoofing.

With Wireshark running and looking at the phone’s traffic, I performed a few routine tasks to see what leaked. The tl;dr version of all of this? In general, Android apps were very good about using TLS. There wasn’t a ton of leakage to a would-be interceptor.

Just for background: Gmail and Twitter both kept almost everything (except for a few generic logo images in Twitter’s case) wrapped in TLS.

Facebook kept pretty much everything encrypted, except for other users’ profile images, which it sent in the clear. This isn’t a huge issue, but it does represent minor leakage; the reason for this seems to be that Facebook keeps the images cached on a CDN, and the CDN servers don’t do SSL, apparently. I’m not sure what sort of nastiness or attacks this opens up, if any (perhaps social engineering stuff, if a motivated attacker could recover your friends list), but it’s worth noting and keeping in mind.

I next confirmed that text messages (SMSes) aren’t sent in the clear. They are not, although I’m not 100% sure they’re even sent over the data connection — it’s hard to tell, among the SIP keepalives, whether a SMS went out via the WiFi connection, or if the phone used the actual cell-data connection instead. Sometime when I’m in a location without any GSM coverage but with WiFi, I’ll have to test it and confirm.

Last, I made a quick call. This is what I was most interested in, since encrypted SIP is surprisingly uncommon — most corporate telephony systems don’t support it, at least not that I’ve seen or worked with. It wouldn’t have surprised me much at all if the SIP connection itself was all in the clear. However, that doesn’t seem to be the case. The call begins with a sip-tls handshake, and then there are lots of UDP packets, all presumably encrypted with a session key negotiated during the handshake. At any rate, Wireshark’s built-in analysis tools weren’t able to recover anything, so calls are not script-kiddie vulnerable. Still, I’m curious about what sort of certificate validation is done on the client side, and how the phone will react to forged SSL certs or attempts by a MITM to downgrade the connection.
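
If you’d rather script the quick-and-dirty check than click through Wireshark, a small scapy sketch along these lines will flag cleartext SIP versus TLS record traffic from the handset. (The phone’s IP address is a placeholder, and this needs root plus the scapy package; it’s an illustration of the idea, not a polished tool.)

    from scapy.all import IP, Raw, sniff

    PHONE_IP = "192.168.1.50"   # placeholder: the handset's address on your LAN

    def classify(pkt):
        if not pkt.haslayer(IP) or not pkt.haslayer(Raw):
            return
        payload = bytes(pkt[Raw].load)
        if payload[:1] == b"\x16" and payload[1:2] == b"\x03":
            # Looks like a TLS handshake record (content type 0x16, version 3.x).
            print("TLS handshake seen from", pkt[IP].src)
        elif payload.split(b" ", 1)[0] in (b"INVITE", b"REGISTER", b"SIP/2.0"):
            # Cleartext SIP would start with a method name or status line.
            print("cleartext SIP seen from", pkt[IP].src)

    # Capture everything to or from the phone; store=False keeps memory flat.
    sniff(filter=f"host {PHONE_IP}", prn=classify, store=False)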

Certainly lots of room for further experiments, but overall I’m relieved to see that the implementation isn’t obviously insecure or vulnerable to trivial packet sniffing.

[/technology/mobile] permalink

Mon, 20 Jun 2011

I’ve been following the Mt. Gox security breach and subsequent Bitcoin/USD price collapse for a little while. This is a rough summary of events as they seem to have happened, based on available information at the current time (June 20, early morning UTC).

My assumption is that at least some of this timeline will turn out to be wrong, which in itself might be interesting in retrospect.

Sometime in early June: Unspecified attackers gained access to a machine, allegedly being used by an auditor, which either contained, or had read-only access to, the Mt. Gox database or some portion of it. Whether the attackers had access to the entire database or “just” the user table doesn’t seem to be known, but the important thing is that they got a table containing, according to Mt. Gox:

  • Account number
  • Account login
  • Email address
  • Encrypted password

For accounts not accessed in the last two months (viewed by Mt. Gox as “inactive”), the password was stored as an MD5 hash. For accounts accessed in the last two months, the password was salted, then hashed with MD5. Nowhere in the database were there plaintext passwords.

Exactly who had access to the database, whether it was an individual or group, isn’t known. It seems that access to the database might have gone through several stages: presumably from the person or group who obtained it initially from the compromised machine, and then to less-sophisticated people or groups. We can say with some confidence that it started to be distributed shortly before June 17th, because on that date somebody posted a message to a forum with some hashed passwords that came from the database. (N.B., this is hearsay from the #Bitcoin IRC channel, and thus fairly speculative. I haven’t looked at a copy of the database to confirm it.)

Monday, June 13: The actual theft of Bitcoins from compromised accounts began, according to various sources, on Monday morning. Approximately 25k BTC were transferred from 478 accounts, according to DailyTech (although elsewhere in the same article they claim 25,000 accounts). The destination address was “1KPTdMb6p7H3YCwsyFqrEmKGmsHqe1Q3jg”.

Presumably, the accounts were accessed by brute-forcing the hashed passwords in the database. It’s not clear to me whether the accounts were all “inactive” (and thus had unsalted password hashes, vulnerable to a pre-computation attack), or if they were active, had salted hashes, but were just weak and fell to a dictionary attack. It probably would have been logical for the attackers to pursue both routes at once: go after the old, unsalted hashes with Rainbow tables, while at the same time performing dictionary attacks against the salted hashes associated with accounts with significant BTC balances. At any rate, using some combination of both routes, they eventually found some vulnerable accounts.
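
For what it’s worth, the difference between the two cases is easy to see in a few lines of Python. This is just an illustration of the general idea (the password and word list are made up), not Mt. Gox’s actual scheme:

    import hashlib
    import os

    password = "hunter2"

    # Unsalted MD5 (the "inactive" accounts): every user with this password gets
    # the same hash, so one precomputed rainbow table cracks all of them at once.
    unsalted = hashlib.md5(password.encode()).hexdigest()

    # Salted MD5 (the "active" accounts): the random salt defeats precomputation,
    # but MD5 is so fast that a weak password still falls to a dictionary attack.
    salt = os.urandom(8).hex()
    salted = hashlib.md5((salt + password).encode()).hexdigest()

    print("unsalted:", unsalted)
    print("salted:  ", salt + "$" + salted)

    # Toy per-account dictionary attack against the salted hash.
    for guess in ("password", "letmein", "hunter2", "trustno1"):
        if hashlib.md5((salt + guess).encode()).hexdigest() == salted:
            print("cracked:", guess)
            break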

The thefts seem to have gone on during the remainder of the week, with Mt. Gox seemingly misreading the increase in theft reports as insecurity on users’ PCs, rather than a security problem on their end.

Sunday, June 19: The Bitcoin ‘Flash Crash’.

At around 3AM Japan Standard Time, someone — my guess is not one of the original attackers — began a massive sell-off from a single compromised account. (One open question is whether this account was a receiver account for stolen BTC from other hacked accounts, or just happened to be a ‘whale’ that they managed to access.) This is where things start to get interesting, because it’s not immediately obvious why someone who recently came into possession of a whole lot of Bitcoins would want to crash the price.

One theory is that it wasn’t intentional; they were hurrying, perhaps working against other attackers who had access to the same database, and wanted to cash out quickly. But another theory, one that I think is more plausible, is that the sell-off was calculated to crash the BTC price, in order to get around Mt. Gox’s $1,000 USD/day withdrawal limit.

By dumping a large number of Bitcoins onto the market — not just once but twice (the attacker repurchased and sold the lot of coins a second time, supposedly) — the market price was driven down. Basically all open bids on the order book were filled, down to ridiculously low prices. At no point did any sort of ‘safety switch’ kick in at Mt. Gox to halt trading; it was full-bore Black Monday mode.

And here we start to run into my limit of knowledge. If we assume that the crash was engineered in order to get around the Mt. Gox withdrawal limit, then when the price was very low, the attackers should have made their move, and transferred whatever they could out of Mt. Gox, to external Bitcoin accounts.

Mt. Gox seems to be claiming that this did not happen, and the withdrawal limits successfully kept the total amount of BTC removed from the exchange to some low number. If true, this would allow them to ‘reset’ the exchange back to how it was before the flash crash, with only limited losses — perhaps low enough that Mt. Gox could make all users whole before resuming trading.

But if this isn’t the case, then it may not be possible for Mt. Gox to shield all of its users from losses. After all, one of the key features of Bitcoins is that they can’t simply be magic-ed into existence on demand by a central authority when convenient. If the Bitcoins have left the building, so to speak, Mt. Gox can’t just grab them back or create new ones to replace them.

In the next few hours or days, I expect these issues to become more clear. Also, it will be interesting to see whether the BTC/USD rate stays at the $17 mark that Mt. Gox plans to resume trading at, or immediately falls to some lower level, in keeping with lowered investor confidence.

Personally, I wouldn’t mind one bit if this marked the end of Bitcoin’s first speculative bubble; most of my interest in Bitcoin is as a currency, not as an instrument for speculative investment (and a not-very-liquid one at that). The question will be whether Bitcoin’s reputation is irretrievably damaged as a result, or if the damage is forgotten about or limited to Mt. Gox.

Certainly more interesting and higher stakes than the usual EVE Online drama, though.

[/finance] permalink

Wed, 01 Jun 2011

As is perhaps evident from some of my other posts, I’m kind of a sucker for alternative currencies. A couple of years ago I watched the trainwreck that was the demise of 1MDC, a ‘currency’ that was backed by EGold (which was itself shut down in 2009). And then there’s the sad saga of the Liberty Dollar, which in retrospect probably would have avoided a lot of legal trouble if it had been called the ‘Liberty Peso’ or something a bit less official.

Liberty Dollars and EGold (and its spawn, e.g. 1MDC) were, until recently, arguably the high-water marks for private currencies in the U.S., in modern times anyway. However, both of them suffered crucial flaws: they were built around centralized institutions which created single points of failure. When they eventually aroused the attentions of the authorities — as any private currency is likely to do — they were pretty quickly taken down.

In the case of someone holding physical Liberty Dollars this wasn’t really catastrophic, since they still had the coins. (Even morons who bought them at terribly inflated prices might have come out ahead, due to the run-up in commodities prices in the last few years, if they held out long enough.) However, “holders” of EGold were right out; they had to wait until mid-2010 to be able to get their money out, and then only by identifying themselves.

One would not have been faulted for thinking that the idea of private currencies, existing in parallel to government-backed ones, was finished.

But it’s instructive to consider why EGold was designed the way it was, with a centralized architecture. If we give its developers any benefit of the doubt at all, they must have realized this was a gaping vulnerability. But it was a necessity for two reasons:

  1. They wanted to back their currency with a physical commodity, namely gold.

  2. They wanted to be able to make money on it.

The point I’m (rather laboriously) making my way around to, is that neither of these are true for all private currencies, and Bitcoin in particular seems to avoid them.

First, Bitcoins aren’t backed by anything. Unlike EGold and Liberty Dollars, both backed (either directly or indirectly) by gold, Bitcoins have exactly zero intrinsic value. While that makes them rather volatile, it also means there’s no warehouse full of metal to be inconveniently seized.

Second, there doesn’t seem to be much in the way of a profit motive behind Bitcoin’s development. Both Liberty Dollar and EGold seem, on their face, to be money-making ventures for those behind them. Liberty Dollars were sold, at a premium above their intrinsic value, by NORFED; EGold charged management fees, presumably in excess of its costs to have some gold bars stored in a vault. PayPal, which is admittedly not a private currency, makes money via transaction fees. All of those models require a centralized architecture in order to generate revenue.

Bitcoin’s architecture eliminates the potential for a Bitcoin, Inc. IPO, but in doing so it makes the system significantly more difficult to shut down.

One area where Bitcoin seems to remain vulnerable is in its convertibility to traditional currencies, especially USD. It’s possible in theory to ‘bootstrap’ a currency (particularly one with a fixed number of tokens) that’s not convertible — someone would need to jump in and start pricing goods in it, and in doing so imbue the currency with real-world value — but it’s certainly a lot easier if you can move value back and forth to and from other currencies.

Currently there are several public Bitcoin markets, including Mt. Gox, the largest, Bitcoin Exchange, which is a forum for person-to-person transactions, and BitcoinExchange.cc, which just strikes me as shady (maybe it’s the .cc TLD).

Even at Mt. Gox, buying Bitcoins is not a straightforward process. You can’t just whip out your Visa and buy $100 worth of Bitcoins at the going rate; instead, you have to go through one of several intermediaries who handle the USD side of the transaction, moving money into a Mt. Gox account, and then you can use the money to buy Bitcoins. It’s not that much worse than setting up an account with a brokerage (and the fees and minimums are much lower!), but it’s not like the Foreign Exchange desk at the airport.

This is where I’m a bit concerned that the whole Bitcoin concept could get in trouble. Right now, the value of Bitcoins — which are backed by nothing, other than a mathematical guarantee that only a certain number can be ‘minted’ — has built into it an assumption about the ease of converting them into USD and other currencies. If the ability to convert Bitcoins to USD or other currencies were suddenly suspended, I suspect you would see a very sharp drop in the value of Bitcoins, and that drop might erode confidence enough to render Bitcoin useless or insignificant as a currency.

Exactly how this plays out will be very interesting in the months and years ahead. The U.S. government took significant amounts of time to bring the axe down on EGold and Liberty Dollars, so the lack of immediate action shouldn’t be taken to indicate any change in attitude towards private currencies. If and when something does happen, my bet is that it occurs at the BTC/USD/EUR/etc. exchange points. We’ll see.

[/finance] permalink

Mon, 31 Jan 2011

I’ve been hearing about this book literally for years now, and just got around to reading it this month: The Victorian Internet (non-referral link) by Tom Standage. I shouldn’t have waited so long.

Who or where I heard about the book from initially I can’t remember, but I was reminded of it by a mention recently on MetaFilter, had the ‘free sample’ sent to my Kindle, and ended up buying it while waiting in the departure area of IAD last week.

It’s not a long read, but it’s an interesting look at the history of the telegraph, which I thought I had a fairly good understanding of but in truth knew very little about. If you want a companion book to go with it (long flight?), I’d say that Erik Larson’s Thunderstruck is a good choice, although it’s a bit more historical-fictiony, since it essentially picks up a few years after the period that Standage examines in The Victorian Internet. (Thunderstruck deals with the development and impact of radio, mostly during the early spark-gap era.)

Anyway, Standage writes a nice little book and even if it does tend to hit the reader over the head a bit hard with the telegraph-network/Internet comparisons, they’re mostly apt.

Although Standage doesn’t come right out and say this, one of the reasons I suspect the parallels between workers in the early telegraph industry and those in the pre-DotBomb tech industry (keep in mind, Standage’s book was written in 1997) work so well is that both involved skills that were so in demand that employers were willing to overlook a multitude of issues in potential employees, and colorful workplace cultures developed as a result.

But the real reason to read the book is as food for thought and as a counterpoint to the frequently “chronocentric” (Standage’s term) claims about the unique or unparalleled nature of current technological developments.

About the only negative — and this is expressed in the Amazon reviews — is that the Kindle edition is really poorly done. It’s pretty obviously just some sort of OCR output dumped out there for purchase without even the benefit of a single read-through by a human. It’s full of I’s standing in for 1’s, and the drop caps at the beginning of each chapter seem to be a frequent source of problems. It’s certainly readable, but a bit embarrassing on Amazon’s part.

[/other/books] permalink