Kadin2048's Weblog


Sat, 05 Sep 2015

As promised previously, here is a quick rundown of a procedure that will let you migrate a Mac’s existing bootable hard disk, containing an old version of OS X (particularly versions capable of running Rosetta), into a VMWare Virtual Machine.

This is probably a much better idea than the half-assed virtual dual-boot idea I had a few months ago, which had the benefit of allowing bare-metal dual booting into the ‘legacy’ OS version, but also carried a certain risk of catastrophic data loss if the disk IDs in your system ever changed.

So, here goes. (The “happy path” procedure is based loosely on these instructions, incidentally.)

  1. Install a modern OS X version on a separate hard drive from the ‘legacy’ (e.g. 10.6.8) install. Alternatively, put the drive with the old installation in a USB chassis and attach it to a newer computer, whichever you prefer. N.B. that this will probably not work with installations from PowerPC machines or pre-EFI Macs.

  2. Use Disk Utility to obtain the disk identifier for the drive containing the ‘legacy’ installation. This is not entirely obvious: select the partition in the left pane, click the Info button, and look for “Disk Identifier” in the resulting window. It’ll be something like disk7s3. Really, we only care about the disk (disk7), not the slice (s3). (A command-line recap of the whole sequence is sketched just after this list.)

  3. cd /Applications/VMware\ Fusion.app/Contents/Library/

    This is just to make commands less ugly. You can execute the commands from wherever you want, just use absolute paths.

  4. ./vmware-rawdiskCreator create /dev/disk7 fullDevice /Users/myUser/Desktop/hdd-link lsilogic

    This creates a .vmdk file that is really just a pointer to the attached block device, in my case /dev/disk7. It doesn’t actually copy anything.

    Astute readers might remember this from my misguided attempt to create a dual-boot configuration. In that scenario, I used the resulting VMDK pointer as the basis for a whole virtual machine. Here, we’re not going to do that, because we’ve learned our lesson about where that road leads.

  5. ./vmware-vdiskmanager -r /Users/myUser/Desktop/hdd-link.vmdk -t 0 /Volumes/BigHardDrive/OldImage.vmdk

    This is where the magic happens: this copies the contents of the drive into a VMDK container file somewhere else on the filesystem. (In the command above it’s going to an external HDD called “BigHardDrive”. You can put it wherever you like, but the destination has to have at least as much free space as the full size of the drive being imaged, not just the space currently in use.)

    It would be more elegant to create the result as a sparse image, but I wasn’t having any luck getting that to work.

  6. Assuming you have a properly patched (see here for VMWare Fusion 6 patches, and here for Fusion 7+) version of VMWare, you should be able to create a new custom VM and point it to the VMDK file, and it’ll boot.
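
For reference, here’s roughly what the whole sequence looks like from the command line. This is only a sketch: it assumes the ‘legacy’ drive shows up as /dev/disk7 and that the image is being written to a volume called “BigHardDrive”, so substitute your own identifiers and paths.

    diskutil list                       # find the disk identifier of the legacy drive
    diskutil unmountDisk /dev/disk7     # make sure none of its volumes are mounted
    cd /Applications/VMware\ Fusion.app/Contents/Library/
    ./vmware-rawdiskCreator create /dev/disk7 fullDevice ~/Desktop/hdd-link lsilogic
    ./vmware-vdiskmanager -r ~/Desktop/hdd-link.vmdk -t 0 /Volumes/BigHardDrive/OldImage.vmdk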

In my case, though, I got a couple of weird errors:

Failed to convert disk: This function cannot be performed because the handle is executing another function (0x10000900000005).

And also:

Failed to convert disk: Insufficient permission to access file (0x260000000d).

Weird. So I tried to mount the drive with Disk Utility, figuring that if it couldn’t be mounted, that was a sign of Something Bad with the drive. Yep, it wouldn’t mount, and Disk Utility told me to run a repair cycle against the drive. Which I did.

The not-very-encouraging result: “Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.” Which I’d really like to do, but I’m not being allowed to for some reason. Ugh.

After a lot of poking around, the details of which I’ll spare you, I discovered via top that a weird fsck_hfs process was running intermittently, taking up a lot of CPU at the same time that I could see and hear disk thrash on the ‘legacy’ drive. That had to be the problem.

Basically, the system kept trying to run fsck against the drive; the check was failing and hanging, and in doing so it prevented any other process from accessing the drive. Only by killing the fsck process could I touch the drive’s contents. (And this wasn’t a one-time thing; fsck would periodically restart and have to be killed over and over. It’s a persistent little bastard.) I don’t think this problem has anything to do with the P2V migration, but I’m leaving the problem and solution out here in case anyone else finds it via Google.
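
If you run into the same thing, finding and killing the runaway process looks roughly like this; a rough sketch, and you may have to repeat it whenever fsck_hfs respawns:

    ps aux | grep '[f]sck_hfs'    # confirm the runaway fsck_hfs process is there, and note its CPU usage
    sudo pkill fsck_hfs           # kill it; repeat whenever it comes back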

Once I killed fsck, I was able to copy the drive to a VMDK and use that VMDK as the basis for a new VM, and it seemed to work acceptably well. Unfortunately I won’t ever be able to boot the physical machine directly into this copy, unlike the old dual-boot configuration, but it does have the side benefit of not destroying one of my attached hard drives every once in a while.

So it’s got that going for it, which is nice.

0 Comments, 0 Trackbacks

[/technology] permalink

A few months ago I laid out a procedure that allows you to keep an aging Mac OS 10.6.8 install alive, either in a VM running inside a more recent OS X release or dual-booting alongside it, primarily as a way to keep Apple’s Rosetta compatibility layer around so that old PPC software can still be used.

Well, it works. Sort of. It works great right up until it doesn’t, and then it gets really ugly if you’re fond of your data. Oops.

The problem stems, ironically, not from the hacky part where we get around VMWare’s artificial limit on Mac guest OS versions. Nope, that part is seemingly totally safe. The dangerous part is the way we create a VMDK file that references a physical block device in the host system, in order to avoid copying the 10.6.8 drive into a disk image and to allow bare-metal booting back into 10.6.8 if desired.

What can happen, if you physically reconfigure your hard drives — say, by moving some of your old internal HDDs out into USB chassis in preparation for copying them to bigger, newer, internal drives; this is all purely hypothetical by the way (eyeroll) — is that the disk identifier that used to point to the ‘legacy’ Mac OS installation will instead point to some other drive. Some perfectly innocent drive, just out minding its own business, having no idea of the dangerous neighborhood it was thrust into.

So, for example: say that when you did the 10.6.8 ‘virtual dual boot’ procedure, the 10.6.8 disk was /dev/disk2. The VMDK file that VMWare uses to point to that drive therefore says /dev/disk2. This is all well and good.

But if at some point in the future you muck around with your hard drives, and suddenly the 10.6.8 drive isn’t /dev/disk2 anymore, and instead /dev/disk2 is occupied by (say) a backup hard drive, and then you fire up your VMWare virtual machine… well, VMWare just assumes that /dev/disk2 is the same as it ever was, and the guest OS continues to use it right where it left off.

Specifically, at least if the VM was suspended (rather than shut down) when all this happened, the VM will actually resume cleanly, but it will then start getting errors as it reads and writes what it thinks is its hard drive but is actually some completely different drive. Oh, and as it does this, it’s corrupting that other drive by writing data from its cache down to it.

This is pretty horrifying from a technical perspective, because there are lots of ways it could have been prevented. The VMDK file itself actually contains a drive serial number and other data that would be enough for VMWare to realize “hey, that’s not the drive I was using when you hit the pause button!”, but it doesn’t seem to be that bright. Instead, it just chews up whatever drive has the misfortune of holding the identifier it thinks it ought to be using.

So, long story short: be extremely careful with the virtual dual boot procedure previously described. At the very least, don’t run it on a computer that contains data you don’t have backups of elsewhere, and you may also want to physically disconnect your backup drives (e.g. Time Machine disks) before playing around with the virtualized guest.
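
One cheap sanity check, for what it’s worth: before resuming a raw-disk VM after any hardware shuffling, compare the device path recorded in the raw-disk VMDK descriptor against whatever that identifier currently points to. Something like the following, where the descriptor path is hypothetical (it’s wherever you created the raw-disk VMDK); the descriptor produced by vmware-rawdiskCreator normally records the raw device it references:

    grep '/dev/disk' /path/to/external-hdd.vmdk   # which raw device the descriptor points at
    diskutil list                                 # which physical drive currently owns that identifier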

In a separate post I’ll detail a procedure for converting a ‘virtual dual boot’ configuration with a physical drive for the guest OS, into a more traditional VM configuration using a disk image.

Anyway, that’s what you get for taking technical advice from strangers on the Internet. We’re all dogs in here, and not good at computer.

0 Comments, 0 Trackbacks

[/technology] permalink

Tue, 30 Dec 2014

Apple, for reasons known only to it, killed Rosetta — its PowerPC compatibility layer — starting with versions of Mac OS after 10.6.8. They also, for reasons that are similarly opaque but seem related to discontinuing Rosetta, make it intentionally difficult to virtualize non-server versions of OS X prior to 10.7.

My read on this is that it’s all part of the Great Apple Upgrade Treadmill, which is their process of intentionally making the entire Apple hardware/software ecosystem obsolete every few years, forcing everyone to buy new stuff. It sucks, and I hate it.

As a way around this, and because I have a fair bit of old hardware hanging around that’s dependent on software that will only run on PowerPC Macs (of which I don’t have any, anymore) or in Rosetta, I needed a way to run OS 10.6.8 in a virtual machine, while allowing my machine’s ‘bare metal’ OS to be upgraded.

It’s a bit of a challenge and not for the faint of heart, although it can be done.

Start state: Mac Pro running OS 10.6.8 (Snow Leopard), which is the last version of Mac OS that has Rosetta installed for compatibility with PowerPC applications. This is a capability we want to preserve.

End state: Mac Pro running OS 10.9 or later on the bare metal, with 10.6.8 running inside a VMWare Fusion container. Machine can also boot up directly into 10.6.8 on the bare metal, for full utilization of the hardware (games, 3D accel., etc.) if required.


  1. The first step is simple: Install OS 10.9 to a separate hard drive, preserving the hard drive that has 10.6.8 on it. When I upgraded, I installed a new hard drive for the new OS, making this pretty easy (in general, if you replace boot drives at the same time as major OS upgrades, you won’t have to deal with failing boot volumes in your primary machine—a small expense for a lot of avoided pain).

  2. Install VMWare Fusion 6.0.3. (That’s the version I used; other versions may also work but you’ll need some different hacks.)

  3. Make sure VMWare Fusion isn’t running, and install/run the “VMWare Unlocker” from InsanelyMac.com. This is required to run a 10.6.8 non-Server guest. This is sort of the key to the process, and it patches your copy of VMWare to remove the asinine checks that Apple apparently mandated that VMWare put in to enforce their obsolesence-suicide-pact EULA.

  4. Don’t update Fusion. If you do, you’ll have to re-install the Unlocker.

  5. Open Fusion, create a new VM. (You can save the files wherever you want; it defaults to ~/Documents/Virtual Machines/ which I think is an obnoxious place to put them, but whatever.) Choose “Mac OS 10.6 Server (64 bit)” as the guest OS type. Or 32-bit, if you’re on a 32-bit machine or trying to boot a 32-bit guest image, although I haven’t tried this. Close the VM and quit Fusion.

  6. Following the instructions here, determine the disk identifier of the hard drive containing 10.6.8, e.g. “disk2” or “disk0” or something similar. Make sure the 10.6.8 volume is unmounted! (A consolidated command-line sketch of steps 6 through 9 appears just after this list.)

  7. From a terminal, run:

    /Applications/VMware\ Fusion.app/Contents/Library/vmware-rawdiskCreator \
    create /dev/disk1 fullDevice ~/external-hdd ide

    You will need to change the disk1 part as needed. Basically, this creates a VMDK file (external-hdd.vmdk) that points to the specified block device; it doesn’t actually copy any data over. I have it creating the vmdk file in the current user’s home directory, but you can put it wherever.

  8. Locate the virtual machine file (in ~/Documents/Virtual Machines/ or wherever) that you created earlier in Fusion. Right click on it, do ‘Show Package Contents’, and move the external-hdd.vmdk file created with the last command into it.

  9. Using a text editor, modify the .vmx file, also inside the package for the virtual machine, and add the following two lines onto the end:

    ide1:0.present = "TRUE"
    ide1:0.fileName = "external-hdd.vmdk"

    Note that this differs from the “techrem” instructions linked above; that procedure specifies the drive ID as ide1:1, which is bus 1, slave. That caused an error when I tried it in Fusion; it wants the drive to be bus 1, master instead. YMMV.

    Also, if you named the vmdk file created with vmware-rawdiskCreator something other than external-hdd, change the fileName line accordingly.

  10. Now, you should be able to fire up Fusion and boot the VM. It will prompt you to authenticate as an administrator, saying that it needs privileges to access a Boot Camp volume (that is apparently what Fusion thinks the raw device vmdk is).

    The first time I booted, it sat for a long time at a grey screen with the Apple logo before actually starting up, but then it did boot. Be patient. If you get a message saying that the “operating system is not supported and will now shut down”, or something to that effect, it means the Unlocker modification didn’t take, and you need to retry that step (make sure the unlocker version you’re using supports the version of VMWare you’re trying to patch; they are pretty sensitive to particulars).

    As soon as you get booted up, you will probably want to change the virtual machine’s computer name (in System Preferences, under Sharing), and perhaps also the machine’s IP address if it’s statically configured. I set mine to Bridged networking and let my DHCP server sort it out.
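
For reference, here’s a rough consolidated sketch of steps 6 through 9 as shell commands. The disk identifier, the VM bundle name (“Snow Leopard.vmwarevm”) and the .vmx filename are assumptions, so substitute whatever Fusion actually created for you:

    diskutil list                       # figure out which disk holds the 10.6.8 install
    diskutil unmountDisk /dev/disk2     # make sure its volumes are unmounted
    /Applications/VMware\ Fusion.app/Contents/Library/vmware-rawdiskCreator \
        create /dev/disk2 fullDevice ~/external-hdd ide
    mv ~/external-hdd.vmdk ~/Documents/Virtual\ Machines/Snow\ Leopard.vmwarevm/
    printf 'ide1:0.present = "TRUE"\nide1:0.fileName = "external-hdd.vmdk"\n' >> \
        ~/Documents/Virtual\ Machines/Snow\ Leopard.vmwarevm/Snow\ Leopard.vmx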

At this point, you have a 10.9 machine, running 10.6.8 in a VM, giving you the ability to run PowerPC applications via Rosetta. It’ll be slower than molasses, but it does work, after a fashion. And because the VM references a physical drive that’s still installed in your computer, you also have the option of booting directly from that disk and running 10.6.8 on the bare metal.


0 Comments, 0 Trackbacks

[/technology] permalink

Mon, 29 Dec 2014

Although I’ve mostly switched over to Linux on the majority of my computers, I have one remaining Mac OS X machine for stuff like photo/video editing, running Quicken and TurboTax, interfacing with odd bits of hardware (label printers, film scanners, etc.) and other stuff that’s just obnoxiously fiddly on Linux.

The machine runs 10.9.5 and doesn’t typically cause me much trouble. However, in the last week or so I’ve noticed that it keeps waking up from sleep in the middle of the night every few minutes, sometimes for hours at a time, but then sometimes sleeping peacefully for long periods as well.

The culprit, according to the system logs, is apparently a Dymo LabelWriter printer connected via USB.

12/29/14 9:29:52.000 AM kernel[0]: The USB device HubDevice (Port 3 of Hub at 0xfd000000) 
 may have caused a wake by issuing a remote wakeup (2)
12/29/14 9:29:52.000 AM kernel[0]: The USB device HubDevice (Port 4 of Hub at 0xfd300000)
 may have caused a wake by issuing a remote wakeup (3)
12/29/14 9:29:52.000 AM kernel[0]: The USB device DYMO LabelWriter 330 (Port 4 of Hub at 0xfd340000)
 may have caused a wake by issuing a remote wakeup (3)
12/29/14 9:31:28.000 AM kernel[0]: The USB device HubDevice (Port 3 of Hub at 0xfd000000)
 may have caused a wake by issuing a remote wakeup (2)
12/29/14 9:31:28.000 AM kernel[0]: The USB device HubDevice (Port 4 of Hub at 0xfd300000)
 may have caused a wake by issuing a remote wakeup (3)
12/29/14 9:31:28.000 AM kernel[0]: The USB device DYMO LabelWriter 330 (Port 4 of Hub at 0xfd340000)
 may have caused a wake by issuing a remote wakeup (3)
[Repeat several hundred times]
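
If you want to check whether something similar is happening on your own machine, grepping the system log for these kernel messages is a quick way to do it. A sketch, assuming (as is typical on 10.9) that kernel messages end up in /var/log/system.log:

    grep -i "remote wakeup" /var/log/system.log          # recent wake-by-USB messages
    zgrep -i "remote wakeup" /var/log/system.log.*.gz    # older, rotated logs, if any exist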

Unfortunately, aside from just unplugging the offending device every night, there doesn’t seem to be a good solution to this problem. Apple’s tech support forums are filled with similar tales of woe, stemming from all sorts of USB devices. There’s no way, at least not that I can find, to control which devices are allowed to wake the system and which aren’t.

Even worse, there doesn’t even seem to be a way of disabling USB wake altogether, and just using the front-panel power button to wake the system, which would be a viable if drastic solution. Reaching down to hit the power button isn’t much of a hardship, and is analogous to the way I have most Linux-based laptops set up anyway (wake on power button, not on keyboard/mouse). But Apple thinks they know better and doesn’t allow it.

This, to be honest, just sucks. Apple seems content to blame USB peripheral manufacturers for “not understanding Mac sleep”, as one forum poster put it, rather than just making their systems less oversensitive, or more configurable. Those obscure bits of hardware are the only reason I still have a Mac, so ditching them isn’t much of a solution.

I guess perhaps it’s what I deserve for buying a computer from a consumer-electronics company, but still, disappointing.

0 Comments, 0 Trackbacks

[/technology] permalink

Mon, 14 Jul 2014

An apparently common issue with Outlook for Mac 2011 is crazily high CPU usage, enough to spin up the fans on a desktop machine or drain the battery on a laptop, when Outlook really shouldn’t be doing anything.

If you do some Googling, you’ll find a lot of people complaining and almost as many recommended solutions. Updating to a version after 14.2 is a typical suggestion, as is deleting and rebuilding your mail accounts (ugh, no thanks).

Keeping Outlook up to date isn’t a bad idea, but in my case the problem persisted with the latest version as of this writing (14.4.3).

In my case, the high CPU usage had something to do with my Gmail IMAP account, which is accessed from Outlook alongside my Exchange mailbox. Disabling the Gmail account stopped the stupid CPU usage, but that’s not really a solution.

What did work was using the Progress window to see what Outlook was up to whenever the CPU pegged. As it turned out, there was a particular IMAP folder — the ‘Starred’ folder, used by both Gmail and Outlook for starred and flagged messages, respectively — which was being constantly refreshed by Outlook. It would upload all the messages in the folder to Gmail, then quiesce for a second, then do it over again. Over and over.

Outlook’s IMAP implementation is just generally bad, and this seems to happen occasionally without warning. But the Outlook engineers seem to have anticipated it, because if you right-click on an IMAP folder, there’s a helpful option called “Repair Folder”. If you use it on the offending folder, it will replace the contents of the local IMAP store with the server’s version, and break the infinite-refresh cycle.

So, long story short: if you have high-CPU issues with Outlook for Mac, try the following:

  1. Update Outlook using the built-in update functionality. See if that fixes the issue.
  2. Use the Progress window to see what Outlook is doing at times when the CPU usage is high. Is it refreshing an IMAP folder?
  3. If so, use the Repair Folder option on that IMAP folder, but be aware that any local changes you’ve made will be lost.

And, of course, lobby your friendly local IT department to use something that sucks less than Exchange.

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 08 Sep 2013

After reading through some — certainly not all, and admittedly not thoroughly — of the documents and analysis of the NSA “BULLRUN” crypto-subversion program, as well as various panicky speculation on the usual discussion sites, I can’t resist the temptation to make a few predictions/guesses. At some point in the future I’ll revisit them and we’ll all get to see whether things are actually better or worse than I suspect they are.

I’m not claiming any special knowledge or expertise here; I’m just a dog on the Internet.

Hypothesis 1: NSA hasn’t made any fundamental breakthroughs in cryptanalysis, such as a method of rapidly factoring large numbers, which render public-key cryptography suddenly useless.

None of the leaks seem to suggest any heretofore-unknown abilities that undermine the mathematics that lie at the heart of PK crypto (trapdoor functions). E.g. a giant quantum computer that can simply brute-force arbitrarily large keys in short amounts of time. In fact, the leaks suggest that this capability almost certainly doesn’t exist, or else all the other messy stuff, like compromising various implementations, wouldn’t be necessary.

Hypothesis 2: There are a variety of strategies used by NSA/GCHQ for getting access to encrypted communications, rather than a single technique.

This is a pretty trivial observation. There’s no single “BULLRUN” vulnerability; instead there was an entire program aimed at compromising various products to make them easier to break, and the way this was done varied from case to case. I point this out only because I suspect that it may get glossed over in public discussions of the issue in the future, particularly if there are especially egregious vulnerabilities that were inserted (as seems likely).

Hypothesis 3: Certificate authorities are probably compromised (duh)

This is conjecture on my part, and not drawn directly from any primary source material. But the widely-accepted certificate authorities that form the heart of the SSL/TLS PKI are simply too big a target for anyone wanting to monitor communications to ignore. If you have the CAs’ root signing keys and access to backbone switches with suitably fast equipment, there’s no technical reason why you can’t MITM TLS connections all day long.

However, MITM attacks are still active rather than passive, and probably infeasible even for the NSA or its contemporaries on a universal basis. Since they’re detectable by a careful-enough user (e.g. someone who actually verifies a certificate fingerprint over a side channel), it’s likely the sort of capability that you keep in reserve for when it counts.

This really shouldn’t be surprising; if anyone seriously thought, pre-Snowden, that Verisign et al wouldn’t and hadn’t handed over the secret keys to their root certs to the NSA, I’d say they were pretty naive.

Hypothesis 4: Offline attacks are facilitated in large part by weak PRNGs

Some documents allude to a program of recording large amounts of encrypted Internet traffic for later decryption and analysis. This rules out conventional MITM attacks, and implies some other method of breaking commonly-used Internet cryptography.

At least one NSA-related weakness seems to have been the Dual_EC_DRBG pseudorandom number generator specified in NIST SP 800-90; it was a bit ham-handed as these things go, because it was discovered, but it’s important because it shows where the interest lies.

It is possible that certain “improvements” were made to hardware RNGs, such as those used in VPN hardware and also in many PCs, but the jury seems to be out right now. Compromising hardware makes somewhat more sense than software, though, since it’s much harder to audit and detect, and also harder to update.

Engineered weaknesses inside [P]RNG hardware used in VPN appliances and other enterprise gear might be the core of NSA’s offline intercept capability, the crown jewel of the whole thing. However, it’s important to keep in mind Hypothesis 2, above.

Hypothesis 5: GCC and other compilers are probably not compromised

It’s possible, both in theory and to some degree in practice, to compromise software by building flaws into the compiler that’s used to create it. (The seminal paper on this topic is “Reflections on Trusting Trust” by Ken Thompson. It’s worth reading.)

Some only-slightly-paranoids have suggested that the NSA and its sister organizations may have attempted to subvert commonly-used compilers in order to weaken all cryptographic software produced with them. I think this is pretty unlikely to have actually been carried out; it just seems like the risk of discovery would be too high. Despite the complexity of something like GCC, there are lots of people from a variety of organizations looking at it, and it would be difficult to subvert all of them, and harder still to insert an exploit that would go completely undetected. In comparison, it would be relatively easy to convince a single company producing ASICs to modify a proprietary design. Just based on bang-for-buck, I think that’s where the effort is likely to have been.

Hypothesis 6: The situation is probably not hopeless, from a security perspective.

There is a refrain in some circles that the situation is now hopeless, and that PK-cryptography-based information security is irretrievably broken and can never be trusted ever again. I do not think that this is the case.

My guess — and this is really a guess; it’s the thing that I’m hoping will be true — is that there’s nothing fundamentally wrong with public key crypto, or even with many carefully-built implementations. It’s when you start optimizing for cost or speed that you open the door.

So: if you are very, very careful, you will still be able to build up a reasonably-secure infrastructure using currently available hardware and software. (‘Reasonably secure’ meaning resistant to untargeted mass surveillance, not necessarily to a targeted attack that might include physical bugging: that’s a much higher bar.) However, some code may need to be changed in order to eliminate any reliance on possibly-compromised components, such as hardware RNGs / accelerators that by their nature are difficult to audit.

Large companies that have significant investments in VPN or TLS-acceleration hardware are probably screwed. Even if the gear is demonstrably flawed, look for most companies to downplay the risk in order to avoid having to suddenly replace it.

Time will tell exactly which techniques are still safe and which aren’t, but my WAG (just for the record, so that there’s something to give a thumbs-up / thumbs-down on later) is that TLS in FIPS-compliant mode, on commodity PC hardware but with hardware RNGs disabled or not present at both ends of the connection, using throwaway certificates (i.e. no use of conventional PKI like certificate authorities) validated via a side channel, will turn out to be fairly good. But a lot of work will have to be invested in validating everything to be sure.

Also, my overall guess is that neither the open-source world nor the commercial, closed-source world will come out entirely unscathed, in terms of reputation for quality. However, the worst vulnerabilities are likely to have been inserted where there were the fewest eyes looking for them, which will probably be in hardware or in tightly integrated firmware/software developed by single companies and distributed in “compiled” (literally compiled, or in the form of an ASIC) form only.

As usual, simpler will turn out to be better, and generic hardware running widely-available software will be better than dedicated boxes filled with a lot of proprietary silicon or code.

So we’ll see how close I am to the mark. Interesting times.

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 22 Feb 2013

I’ve recently (re)taken up cycling in a fairly major way, and have been surprised by how much I’ve enjoyed it. One of the things that’s making it more fun this time around, as compared to previous dabblings in years past, is the various ways that you can measure and quantify your progress — not to mention your suffering — and compare it with others, etc.

For example, a recent ride taken with a few friends:

Time: 01:54:50
Avg Speed: 13.5 mi/h
Distance: 25.8 mi
Energy Output: 826 kJ
Average Power: 120 W

Now, 120 W is really not especially great from a competitive cycling perspective; better riders routinely output 500-ish watts. But it struck me as being pretty efficient: for all my effort, the ride actually only required the same amount of power to propel me on my way as would have been required by two household light bulbs.

So that got me thinking: just how efficient is cycling?

My 25.8 mi / 41.5 km roundtrip ride required 826 kJ, if we believe Strava; that’s mechanical energy at the pedals. (I unfortunately don’t have a power meter on my bike, so this is a bit of an estimate on Strava’s part, taking into account my weight, my bike’s weight, my speed, the elevation changes on the route, etc.)

That’s about the same as the energy released by 1.7 grams of combusted gasoline, per Wolfram Alpha. If I ran on gasoline, I’d be able to carry enough in my water bottle to ride across the U.S. more than 3 times (7,813 miles worth).

Of course, cars aren’t perfectly efficient in their use of gasoline, and I’m not a perfectly efficient user of food calories. Strava helpfully estimates the food-calorie expenditure of my ride at 921 Calories, which is 3.85 MJ, leading to a somewhat disappointing figure of only 21.4% overall efficiency. (Disappointing only in the engineering sense; from an exercise perspective I’d really rather it be low.)

Though it’s about on par with a car, interestingly enough. The Feds give anywhere between 14-26% as a typical ‘tank-to-tread’ efficiency figure for a passenger car, with most losses in the engine itself.

So if I were able to drink gasoline and use it at least as efficiently as a car, my water bottle would get me about a thousand miles. (1,094 mi or 1,760 km, using the low-end 14% efficiency figure for a car.) Still pretty good, considering that my own car would only get about 5 miles on the same amount of fuel (24 fl oz at 25 MPG).

Of course, a car isn’t an especially fair comparison — it has a lot of overhead both in terms of mass, rolling resistance (more, lower-pressure tires), and air resistance (higher cross-sectional area). Some sort of small motorbike would be a better comparison, and there I suspect you’d start to see an even playing field.

Maybe that’s my argument for getting a motorcycle…

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 26 Sep 2012

I recently had a hardware failure, and decided to take the opportunity to upgrade my aging home server from Ubuntu ‘Dapper Drake’ to Scientific Linux. The reasons for my move away from Ubuntu are an article unto themselves, but it boils down to what I see as an increasing contempt for existing users (and a pointless pursuit of hypothetical tablet users — everybody wants to try their hand at being Apple these days, apparently unaware that the role has been filled), combined with — and this is significantly more important — the fact that I have been using RPM-based distros far more often at work than Debian/APT-based ones, despite the many advantages of the latter. So, anyway, I decided to switch the server to SL.

The actual migration process wasn’t pretty and involved a close call with a failing hard drive which I won’t bore you with. The basic process was to preserve the /home partition while tossing everything else. This wasn’t too hard, since SL uses the same Anaconda installer as Fedora and many other distros. I just told it to use my root partition as /, my home partition as /home, etc.

And then I rebooted into my new machine. And seemingly everything broke.

The first hint was on login: I got a helpful message informing me that my home directory (e.g. /home/myusername) didn’t exist. Which was interesting, because once logged in I could easily cd to that directory, which plainly did exist on the filesystem.

The next issue was with ssh: although I could connect via ssh as my normal user, it wasn’t possible to use public key auth, based on the authorized_keys file in my home directory. It was as though the ssh process wasn’t able to access my home directory…

As it turned out, the culprit was SELinux. Because the “source” operating system that I was migrating from didn’t have SELinux enabled, and the “destination” one did, there weren’t proper ‘security contexts’ (extended attributes) on the files stored on /home.

The solution was pretty trivial: I had to run # restorecon -R -v /home (note: as root!), which took a few minutes, and then everything worked as expected. This was something I only discovered after much searching, on this forum posting regarding a Fedora 12 install. I’m noting it here partly so that other people can find it more easily in the future, and partly because, unfortunately, the forums are filled with people experiencing the same problem and receiving terrible advice that they need to reformat /home (in effect, throw away all their data) in order to upgrade or change distros.
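
In other words, the diagnosis and the fix boil down to something like this (assuming SELinux really is the culprit on your system, too):

    getenforce                  # should print "Enforcing" on a stock Scientific Linux install
    ls -Zd /home/myusername     # shows the security context (or lack thereof) on the home directory
    restorecon -R -v /home      # as root: relabel everything under /home with the default contexts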

Bottom line: if you are running into weird issues on login (console or SSH) after an upgrade from a non-SELinux distro to a SELinux-enabled one, try rebuilding the security context before taking any drastic steps.

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 01 Aug 2012

Lockheed is apparently working on a next-generation carrier-based unmanned fighter aircraft, the “Sea Ghost.” At least, they are “working” on it in the sense that they paid some graphic designer to make CGI glamour shots of something that might be a UAV, sitting on the deck of what is presumably an aircraft carrier. As press releases go it’s a little underwhelming, but whatever.

From the rendering, it appears that the Sea Ghost is a flying wing design, which is interesting for a number of reasons. Flying wings are almost as old as aviation in general, but have proved — with a few notable exceptions — to be largely impractical, despite having some nice advantages on paper over the traditional fuselage-plus-wings monoplane design. It’s one of those ideas that’s just so good that, despite a sobering list of failures, it just won’t die.

One of the big problems with flying wings is yaw control. Since they lack a tail and a traditional rudder, getting the aircraft to rotate in the horizontal plane is difficult. Typically — in the case of the B-2, anyway — this is accomplished by careful manipulation of the control surfaces to create drag on one wing, while simultaneously compensating on the other side in order to control roll. This is, to put it mildly, a neat trick, and it’s probably the only reason why the B-2 exists as a production aircraft (albeit a really expensive one).

I suspect that the Sea Ghost is built the same way, if only because it’s been proven to work and the Lockheed rendering doesn’t show any other vertical stabilizer surfaces that would do the job.

But a thought occurred to me: if you can make a drone small and light enough (actually, a small enough moment of inertia), you don’t need to do the B-2 drag trick at all. You could maneuver it like a satellite. That is, by using a gyroscope not simply to sense the aircraft’s change in attitude, but actually to make the aircraft move about the gyroscope. Simply: you spin up the gyro, and then use servos to tilt the gimbal that the gyro sits in. The result is a torque on the airframe opposite the direction in which the gyro’s axis is moved. With multiple gyros, you could potentially have roll, pitch, and yaw control.
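
To put a very rough number on it, with figures that are pure hypotheticals chosen just to show the scale involved: a 0.5 kg disc with a 5 cm radius, spun at 20,000 RPM and gimballed at 1 radian per second, produces a gyroscopic torque equal to its angular momentum (half m r-squared times the spin rate) multiplied by the gimbal rate:

    # hypothetical gyro: 0.5 kg disc, 5 cm radius, 20,000 RPM spin, 1 rad/s gimbal rate
    # yaw torque = (1/2 * m * r^2) * spin_rate * gimbal_rate
    echo "0.5 * 0.5 * 0.05^2 * (20000 * 2 * 3.14159 / 60) * 1" | bc -l    # ≈ 1.3 N·m

That’s a useful amount of torque on a very light airframe, and next to nothing on a big one.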

This isn’t practical for most aircraft — aside from helicopters which do it naturally to a degree — because they have too much inertia, and the external forces acting against them are too large; the gyroscope you’d need to provide any sort of useful maneuvering ability would either make the plane too heavy to fly, or take up space needed for other things (e.g. bombs, in the case of most flying wing aircraft). And that might still be the case with the Sea Ghost, but it’s not necessarily the case with all drones.

The smaller, and more importantly lighter, the aircraft the easier it would be to maneuver with a gyroscope rather than external aerodynamic control surfaces. Once you remove the requirement to carry a person, aircraft can be quite small.

It wouldn’t surprise me if you could maneuver a small hobbyist aircraft with a surplus artificial horizon gyro. To my knowledge, nobody has done this yet, but it seems like a pretty straightforward merger of existing technology. You’d need a bunch of additional MEMS gyros, which are lightweight, to sense the change in attitude and stop and start the manuevering gyro’s movement, but there’s nothing that seems like an obvious deal-breaker.

The advantage of such a system would be that there’s no change to the outside skin of the aircraft in order to make it maneuver (within the limits of the torque provided by the gyro). That would mean a lower radar cross section, and potentially less complexity and weight due to fewer moving parts in the wings.

Just one of the many intriguing possibilities you come up with, when you take 80 kilos of human meat out of the list of requirements.

Almost enough to get me back into model airplanes again.

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 08 Apr 2012

For no particularly good reason, I decided I wanted to play around with IBM VM earlier this weekend. Although this would seem on its face to be fairly difficult — VM/370 is a mainframe OS, after all — thanks to the Hercules emulator, you can get it running on either Windows or Linux fairly easily.

Unfortunately, many of the instructions I found online were either geared towards people having trouble compiling Hercules from source (which I avoided thanks to Ubuntu’s binaries), or assume a lot of pre-existing VM/370 knowledge, or are woefully out of date. So here are just a couple of notes should anyone else be interested in playing around with a fake mainframe…

Some notes about my environment:

  • Dual-core AMD Athlon64 2GHz
  • 1 GB RAM (yes, I know, it needs more memory)
  • Ubuntu 10.04 LTS, aka Lucid

Ubuntu Lucid has a good binary version of Hercules in the repositories, so no compilation is required, at least not for any of the basic features that I was initially interested in. A quick apt-get install hercules and apt-get install x3270 were the only necessities.
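
In other words, getting the pieces onto a Lucid box and talking to the emulator looks roughly like this. The 127.0.0.1 address and the config file name are assumptions (see the notes below about “localhost” and about when to connect):

    sudo apt-get install hercules x3270    # the emulator plus a 3270 terminal emulator
    mkdir ~/vm370 && cd ~/vm370            # working directory for the config, 'tapes' and DASD files
    hercules -f hercules.cnf               # start the emulator with your config file
    x3270 127.0.0.1:3270 &                 # from another terminal: connect the 3270 emulator on port 3270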

In general, I followed the instructions at gunkies.org: Installing VM/370 on Hercules. However, there were a few differences. The guide is geared towards someone running Hercules on Windows, not Linux.

  • You do not need to set everything up in the same location as the Hercules binaries, as the guide seems to indicate. I created a vm370 directory in my user home, and it worked fine as a place to set up the various archives and DASD files (virtual machine drives).

  • The guide takes you through sequences where you boot the emulated machine, load a ‘tape’, reboot, then load the other ‘tape’. When I did this, the second load didn’t work (indefinite hang until I quit the virtual system from the Hercules console). But after examining the DASD files, it seemed like the second tape had loaded anyway, but the timestamp indicated that it had loaded at the same time as the first tape. I think that they both loaded one after the other in the first boot cycle — hard to tell for sure at this point, but don’t be too concerned if things don’t seem to work as described; I got a working system anyway. Update: The instructions work as described; I had a badly set-up DASD file that was causing an error, which did not show itself until later when I logged in and tried to start CMS.

  • To get a 3270 connection, I had to point the terminal emulator at the machine’s IP address on port 3270; trying to connect to “localhost” didn’t work. I assume this is just a result of how Hercules is set up to listen, but it caused me to waste some time.

  • The tutorial tells you to start Hercules, then connect your 3270 emulator to the virtual system, then run the ipl command; the expected result is to see the loader on the 3270. For me, this didn’t work… the 3270 display just hung at the Hercules splash screen. To interact with the loader, I had to disconnect and reconnect the 3270 emulator. So, rather than starting Hercules, connecting the 3270, then ipl-ing, it seems easier to start Hercules, ipl, then connect and operate the loader.

Of course, when you get through the whole procedure, what you’ll have is a bare installation of VM/370… without documentation (or extensive previous experience), you can’t do a whole lot. That’s what I’m figuring out now. Perhaps it’ll be the subject of a future entry.

0 Comments, 0 Trackbacks

[/technology/software] permalink