Kadin2048's Weblog


Fri, 24 Jun 2016

Just a quick tip, because I found this information absurdly hard to find online using the search terms I was using. If anyone else out there has a Dell Latitude E6410 laptop, and wants to use it under Linux and achieve the same scrolling behavior as under Windows, using the big center button under the ‘DualPoint Stick’ (Dell’s term for the Touchpoint-ish control in the middle of the keyboard) to scroll, here’s what you need to do:

Create a new file in /usr/share/X11/xorg.conf.d/; I called it 60-wheel-emulation.conf, although the filename isn’t especially important as long as it doesn’t start with a number lower than the other files in the directory.

E.g. you can just do:

$ sudo emacs /usr/share/X11/xorg.conf.d/60-wheel-emulation.conf

In the file, add the following:

Section "InputClass"
   Identifier "Wheel Emulation"
   MatchProduct "DualPoint Stick"
   Option "EmulateWheel" "on"
   Option "EmulateWheelButton" "2"
   Option "XAxisMapping" "6 7"
   Option "YAxisMapping" "4 5"

This activates a feature called (as you may have figured out) Wheel Emulation, which simulates scroll wheel behavior when a button is pressed and the mouse — or in this case, the pointing stick — is moved. In Windows, this is the default behavior for the Dell DualPoint, but in Linux, the default behavior is for that button to behave as an (absurdly large) traditional middle-click mouse button, which pastes the X primary selection (the last text you highlighted).

On a regular mouse, the Linux behavior (paste) is arguably a lot more useful, particularly if you also have an actual scrollwheel. But on the E6410, with the pointing stick, I think that scrolling is a lot more common of an interaction than paste, and I found that I really missed it.

This restores the functionality to what you may be used to.

Further information can be found at this Unix Stackexchange question, which is where I got the original tip. Note that you can’t just copy and paste from that page and have it work on a Dell; the product name is wrong. If you have another model or brand of laptop, though, you can determine the correct product name as described there, using the xinput --list command.
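
For example, after restarting X you can sanity-check both the device name and whether the options actually took, using xinput. The device name below is just an example of what an ALPS stick typically reports; use whatever name shows up in the list on your machine:

$ xinput list
$ xinput list-props "AlpsPS/2 ALPS DualPoint Stick" | grep -i wheel

If the InputClass section matched, the wheel emulation properties should show up as enabled on the stick.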

0 Comments, 0 Trackbacks

[/technology/software] permalink

Sun, 20 Mar 2016

Or, “So You Bought This Thing on eBay, Now What?”

TL;DR version: The important bits

If you need to do a factory reset on a Linksys SRW2024P, you will need a DB9 female-to-female straight through cable, not a null modem cable or a typical Cisco cable or anything else. Connect at 38400 8N1, turn the switch on while holding down Ctrl-U in the terminal, select “D” for delete at the firmware menu, and then delete the “startup-config” file. Reboot the switch and it should be back to factory defaults, with username ‘admin’ and no password.

The whole story

By way of background, I’d been looking for a new ‘core switch’ for my home network for a while, to replace the grown-not-designed arrangement of crummy 5-port desktop switches that had been slowly proliferating throughout the house. A while back, in a fit of DIY hubris, I managed to run a lot of Cat 5e cabling through the walls of the house, running it all back to a single location so that one big switch could serve everything.

While Ethernet switches are not exactly expensive these days, I had a couple of requirements: I wanted Gigabit, and not just on a couple of uplink ports, but on every port, and I also wanted Power Over Ethernet, so that I could drive IP phones, cameras, wireless APs, and other gadgets in the future without individual power supplies.

While Ethernet switches in general are cheap, GigE + PoE switches are not. You can easily drop several hundred dollars on a new one, and you have to get fairly high up into ‘business class’ territory to find the right mix of features (which is fine, since consumer networking equipment is largely garbage). However, after trolling through eBay for a few days, I noticed an exception: the Linksys SRW2024P. For reasons that aren’t immediately clear, there were a fair number of these things on the ‘bay for around $100, which is a pretty great deal for a managed GigE switch, even before the PoE feature.

So, of course, I bought one. And then the fun began.

A few notes about the SRW2024P, in case you are thinking about buying one: first, they are loud. One discussion thread described them as “datacenter loud”, and that’s probably fair. You do not want to have one of them in your bedroom or home-office, even in a closet. You might not even want it on the same floor as your bedroom or office, depending on how big your house is. It has a couple of very high-RPM fans that are just obnoxious. Second, the switch you buy will almost certainly not be factory-reset. At least, mine wasn’t, and most people asking questions on support forums don’t seem to have gotten them that way, either.

I think the current crop of eBay specials must be corporate datacenter pulls, and whoever previously owned them was smart enough not to leave them in their default configuration. Good for them, annoying for the next person.

The SRW2024P doesn’t have an easily-accessible reset button. In fact, as far as I can tell, it doesn’t have a reset button anywhere. To do a factory reset, you have to log into the switch via the serial port and wipe the settings file from the firmware. And this is where things get really fun, and brings me to the whole point of this post.

Linksys, aka Cisco, in their infinite wisdom and/or greed, decided against putting a regular serial-console port on the SRW2024P. The port that’s on the front of the unit is an RS232 port, but with the pins arranged in such a way that virtually no widely-available serial cable will work.

There is a lot of misinformation floating around concerning the SRW2024P’s serial port. In particular, there are many suggestions online that you need to use an RS232 null modem cable to connect it to a computer. This is incorrect, and a null modem cable will not work.

What you need is a DB9 female-to-female, straight through cable. Which is not a null modem cable. A null modem cable has the Transmit and Receive Data pins crossed, so that “transmit” at one end of the cable arrives on the “receive” pin at the other; this allows two computers (“DTEs” or “data terminal equipment” in RS232 lingo) to communicate without a pair of modems (“DCEs”, “data communication equipment”, in the middle). Hence the ‘null modem’ name.

Typically, devices with DB9 male ports are DTEs, and female DB9 or DB25 ports are DCEs. If you still have a box of 1990s junk around somewhere, feel free to look at an actual modem. RS232 straight-through cables are typically Male to Female, while null modem cables are typically Female to Female. This is by convention, not Galactic Law or anything, but it’s widely followed.

Except by the SRW2024P. It won’t work with a null modem cable, despite the male DB9 port on the front leading you to (reasonably) think that it would. I tried a number of null modem cables, including the programming cables used by a variety of other switches and routers. None of them worked.

Basically, the SRW2024P has the TXD and RXD pins already swapped inside its DB9 Male connector on the front. This is stupid, because it means you can’t use either a standard null modem cable or a standard straight through cable, because the genders don’t match, but that’s what Linksys did. When the switches were sold new they reportedly came with a special cable, but good luck finding one now.

I wasn’t able to easily find any DB9 F-F straight-through cables, locally or for a reasonable price online; they just aren’t something that gets used very often. The cheapest and easiest route was to make one. To do the job, I just used a couple of these DB9 screw terminal breakout boards, but you could also use a couple of DB9-to-8P8C adapters and a piece of straight-through Cat5. Whatever works. But the important part is that the TXD at one end is wired to TXD at the other, RXD to RXD, and Ground to Ground. None of the other pins seem to matter, since there’s no flow control.
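
In terms of actual pin numbers, taking the standard DB9 assignments on the PC side (pin 2 = receive, pin 3 = transmit, pin 5 = signal ground), "straight through" just means:

Pin 2 <-> Pin 2
Pin 3 <-> Pin 3
Pin 5 <-> Pin 5   (ground)

with the remaining six pins left unconnected.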

Anyway, once you get the correct cable, the reset process is pretty straightforward:

  1. Connect up the cable

  2. Configure your terminal for 38400 baud, 8 data bits, 1 stop bit, no parity (a.k.a. “38400 8N1”). I used Minicom on Linux, but you could use any terminal emulator; nothing fancy. (An example invocation is shown just after this list.)

  3. Turn the switch on, while watching the terminal. You should at least see some boot messages. It’s possible for the switch to be set to serial settings other than 38400 8N1, but it seems as though the firmware is always set that way. So if the switch is working and the serial connection is correct, you should see something.

  4. “Try before you pry”, as the fire service saying goes. Before screwing around with the factory reset, it’s worth giving the default login password a try. (It’s ‘admin’ as the user with no password.) In my case this didn’t work, but it’s always worth a shot.

  5. Power cycle the switch. As you turn it back on, press Ctrl-U on the terminal. Within the first second or so of boot, this should drop you into a firmware menu. Pressing ‘D’ for Delete will show you a list of files in the switch’s firmware.

  6. Delete the ‘startup-config’ file but nothing else. Power cycle the switch again. Don’t be alarmed if it takes a while to boot back up. (I had to power cycle it twice; the first time I don’t think I left it unplugged long enough. Give it 10s or so.)
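
For reference, the terminal setup in step 2 can be as simple as one of the following, assuming a USB-to-serial adapter that shows up as /dev/ttyUSB0 (adjust the device node to match yours):

$ minicom -D /dev/ttyUSB0 -b 38400
$ screen /dev/ttyUSB0 38400

Both default to 8N1. If you use minicom, make sure hardware flow control is turned off in its serial settings, since only the data and ground lines are connected in the cable described above.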

What you should end up with is what you probably wanted in the first place: a switch in factory-fresh condition. From there, you can either continue configuring it via the serial connection, or use the web interface. Beware that the web interface seems to perform poorly on anything except for IE, though.

References and Anti-Insomnia Treatments:

  • How to reset the Switch??? - one of the only useful threads I found on Linksys’ support forums.

  • Real console on Linksys 2024P - I haven’t tried this procedure yet, but it’s allegedly a way of getting a ‘power user’ console on the switch via Telnet, once you’ve factory reset it.

  • Linksys SRW models password reset - One of the few articles that correctly identified the necessary straight-through cable, but it tells you to press Esc on boot to access the firmware reset menu; on mine, I had to press Ctrl-U as documented elsewhere. Perhaps other SRW models use Esc?

0 Comments, 0 Trackbacks

[/technology] permalink

Sat, 05 Sep 2015

As promised previously, here is a quick rundown of a procedure that will let you migrate a Mac’s existing bootable hard disk, containing an old version of OS X (particularly versions capable of running Rosetta), into a VMWare Virtual Machine.

This is probably a much better idea than the halfassed virtual dual boot idea I had a few months ago, which had the benefit of allowing bare-metal dual booting into the ‘legacy’ OS version, but also carried with it a certain risk of catastrophic data loss if the disk IDs in your system ever changed.

So, here goes. (The “happy path” procedure is based loosely on these instructions, incidentally.)

  1. Install a modern OS X version on a separate hard drive from the ‘legacy’ (e.g. 10.6.8) install. Or alternately, put the drive with the old installation in a USB chassis and attach it to a newer computer, whichever you prefer. N.B. that this will probably not work with installations from PowerPC machines or pre-EFI Macs.

  2. Use Disk Utility to obtain the disk identifier for the drive containing the ‘legacy’ installation. This is not entirely obvious: you get it from Disk Utility by selecting the partition in the left pane, then clicking on the Info button and looking for “Disk Identifier” in the resulting window. It’ll be something like disk7s3. Really, we only care about the disk, not the slice. (If you prefer the command line, see the diskutil note after this list.)

  3. cd /Applications/VMware\ Fusion.app/Contents/Library/

    This is just to make commands less ugly. You can execute the commands from wherever you want, just use absolute paths.

  4. ./vmware-rawdiskCreator create /dev/disk7 fullDevice /Users/myUser/Desktop/hdd-link lsilogic

    This creates a .vmdk file that is really just a pointer to the attached block device, in my case /dev/disk7. It doesn’t actually copy anything.

    Astute readers might remember this from my misguided attempt to create a dual-boot configuration. In that scenario, I used the resulting VMDK pointer as the basis for a whole virtual machine. Here, we’re not going to do that, because we’ve learned our lesson about where that road leads.

  5. ./vmware-vdiskmanager -r /Users/myUser/Desktop/hdd-link.vmdk -t 0 /Volumes/BigHardDrive/OldImage.vmdk

    This is where the magic happens; this copies the contents of the drive and puts it into a VMDK container file, somewhere else on the filesystem. (In the command above it’s going to an external HDD called “BigHardDrive”. You can put it wherever, but the destination has to have as much space free as the size of the drive being imaged. Not just space in use, but the size of the entire drive.)

    It would be more elegant to create the result as a sparse image, but I wasn’t having any luck getting that to work.

  6. Assuming you have a properly patched (see here for VMWare Fusion 6 patches, and here for Fusion 7+) version of VMWare, you should be able to create a new custom VM and point it to the VMDK file, and it’ll boot.
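
As an aside, if you’d rather not click through Disk Utility for step 2, the stock diskutil command shows the same identifiers; this isn’t part of the original instructions, just an equivalent route:

$ diskutil list    # every attached disk and its partitions, with diskN / diskNsM identifiers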

In my case, though, I got a couple of weird errors:

Failed to convert disk: This function cannot be performed because the handle is executing another function (0x10000900000005).

And also:

Failed to convert disk: Insufficient permission to access file (0x260000000d).

Weird. So I tried to mount it with Disk Utility, figuring that a failure to mount would be a sign of Something Bad with the drive. Yep, it wouldn’t mount. It told me to run a repair cycle against the drive. Which I did.

The not-very-encouraging result: Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files. Which I’d really like to try and do, but I’m not being allowed to for some reason. Ugh.

After a lot of poking around, the details of which I’ll spare you, I discovered via top that there was a weird fsck_hfs process running occasionally, eating a lot of CPU at the same times that I’d see and hear disk thrash on the ‘legacy’ drive. That had to be the problem.

Basically, the system was trying to run fsck against the drive, it was failing and hanging, but in doing so preventing any other processes from accessing the drive. Only by killing the fsck process could I touch the drive’s contents. (And this wasn’t a one-time thing; fsck would periodically restart and have to be killed over and over. It’s a persistent little bastard.) I don’t think this problem has anything to do with the P2V migration, but I’m leaving the problem and solution out here in case anyone else finds it via Google.
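
In case it saves someone a search, finding and killing it is nothing fancier than the following, repeated whenever fsck_hfs reappears (pkill should be present on any recent OS X; otherwise note the PID from ps and kill it directly):

$ ps aux | grep fsck_hfs     # confirm it's running and note the PID
$ sudo pkill fsck_hfs        # or: sudo kill <PID>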

Once I killed fsck, I was able to copy the drive to a VMDK, use that VMDK as the basis for a new VM, and it seemed to work acceptably well. Unfortunately I won’t ever be able to boot directly from this virtual machine, unlike the old dual-boot configuration, but it does have the side benefit of not destroying one of my attached hard drives every once in a while.

So it’s got that going for it, which is nice.

0 Comments, 0 Trackbacks

[/technology] permalink

A few months ago I laid out a procedure that allows you to keep an aging Mac OS 10.6.8 install alive, either in a VM running inside a more recent OS X release or dual-booting alongside it, primarily as a way to keep Apple’s Rosetta compatibility layer around so that old PPC software can still be used.

Well, it works. Sort of. It works great right up until it doesn’t, and then it gets really ugly if you’re fond of your data. Oops.

The problem stems, ironically, not from the hacky part where we get around VMWare’s artificial limit on Mac guest OS versions. Nope, that part is seemingly totally safe. The dangerous part is the way we create a VMDK file that references a physical block device in the host system, in order to avoid copying the 10.6.8 drive’s contents into a disk image and to allow bare-metal booting back into 10.6.8 if desired.

What can happen, if you physically reconfigure your hard drives — say, by moving some of your old internal HDDs out into USB chassis in preparation for copying them to bigger, newer, internal drives; this is all purely hypothetical by the way (eyeroll) — is that the disk identifier that used to point to the ‘legacy’ Mac OS installation will instead point to some other drive. Some perfectly innocent drive, just out minding its own business, having no idea of the dangerous neighborhood it was thrust into.

So, for example: say that when you did the 10.6.8 ‘virtual dual boot’ procedure, the 10.6.8 disk was /dev/disk2. So the VMDK file that VMWare uses to point to that drive says /dev/disk2. This is all well and good.

But if at some point in the future you muck around with your hard drives, and suddenly the 10.6.8 drive isn’t /dev/disk2 anymore, and instead /dev/disk2 is occupied by (say) a backup hard drive, and then you fire up your VMWare virtual machine… well, VMWare just assumes that /dev/disk2 is the same as it ever was, and the guest OS continues to use it right where it left off.

Specifically, at least if the VM is suspended (rather than shut down) when all this happens, the VM will actually resume cleanly, but then it’ll start getting errors as it starts to read and write from what it thinks is its hard drive, but which is actually some completely different drive. Oh, and as it does this, it’s corrupting the other drive by writing data from its cache down to it.

This is pretty horrifying, from a technical perspective, because there are plenty of ways it could have been prevented. The VMDK file itself actually contains a drive serial number and other data which would be enough for VMWare to realize “hey, that’s not the drive I was using when you hit the pause button!” but it doesn’t seem to be that bright. Instead, it just chews up whatever drive has the misfortune to have the identifier that it thinks it ought to be using.

So, long story short: be extremely careful with the virtual dual boot procedure previously described. At the very least, don’t run it on a computer that contains data you don’t have backups of elsewhere, and you may also want to physically disconnect your backup drives (e.g. Time Machine disks) before playing around with the virtualized guest.
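
One cheap sanity check, if you do keep running the raw-device setup: before resuming or booting the guest after any hardware reshuffle, compare the device path recorded in the pointer VMDK against what that identifier currently points to. Using the external-hdd.vmdk name from the original write-up (wherever you ended up putting it), something like:

$ grep -n '/dev/disk' external-hdd.vmdk    # which block device does the descriptor reference?
$ diskutil list                            # is that identifier still the 10.6.8 drive?

If the two don’t agree, sort out the drives (or the descriptor) before letting the VM touch anything.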

In a separate post I’ll detail a procedure for converting a ‘virtual dual boot’ configuration with a physical drive for the guest OS, into a more traditional VM configuration using a disk image.

Anyway, that’s what you get for taking technical advice from strangers on the Internet. We’re all dogs in here, and not good at computer.

0 Comments, 0 Trackbacks

[/technology] permalink

Tue, 30 Dec 2014

Apple, for reasons known only to it, killed Rosetta — its PowerPC compatibility layer — starting with versions of Mac OS after 10.6.8. They also, for reasons that are similarly opaque but seem related to discontinuing Rosetta, make it intentionally difficult to virtualize non-server versions of OS X prior to 10.7.

My read on this is that it’s all part of the Great Apple Upgrade Treadmill, which is their process of intentionally making the entire Apple hardware/software ecosystem obsolete every few years, forcing everyone to buy new stuff. It sucks, and I hate it.

As a way around this, and because I have a fair bit of old hardware hanging around that’s dependent on software that will only run on PowerPC Macs (of which I don’t have any, anymore) or in Rosetta, I needed a way to run OS 10.6.8 in a virtual machine, while allowing my machine’s ‘bare metal’ OS to be upgraded.

It’s a bit of a challenge and not for the faint of heart, although it can be done.

Start state: Mac Pro running OS 10.6.8 (Snow Leopard), which is the last version of Mac OS that has Rosetta installed for compatibility with PowerPC applications. This is a capability we want to preserve.

End state: Mac Pro running OS 10.9 or later on the bare metal, with 10.6.8 running inside a VMWare Fusion container. Machine can also boot up directly into 10.6.8 on the bare metal, for full utilization of the hardware (games, 3D accel., etc.) if required.


  1. The first step is simple: Install OS 10.9 to a separate hard drive, preserving the hard drive that has 10.6.8 on it. When I upgraded, I installed a new hard drive for the new OS, making this pretty easy (in general, if you replace boot drives at the same time as major OS upgrades, you won’t have to deal with failing boot volumes in your primary machine—a small expense for a lot of avoided pain).

  2. Install VMWare Fusion 6.0.3. (That’s the version I used; other versions may also work but you’ll need some different hacks.)

  3. Make sure VMWare Fusion isn’t running, and install/run the “VMWare Unlocker” from InsanelyMac.com. This is required to run a 10.6.8 non-Server guest. This is sort of the key to the process, and it patches your copy of VMWare to remove the asinine checks that Apple apparently mandated that VMWare put in to enforce their obsolescence-suicide-pact EULA.

  4. Don’t update Fusion. If you do, you’ll have to re-install the Unlocker.

  5. Open Fusion, create a new VM. (You can save the files wherever you want; it defaults to ~/Documents/Virtual Machines/ which I think is an obnoxious place to put them, but whatever.) Choose “Mac OS 10.6 Server (64 bit)” as the guest OS type. Or 32-bit, if you’re on a 32-bit machine or trying to boot a 32-bit guest image, although I haven’t tried this. Close the VM and quit Fusion.

  6. Following the instructions here, determine the disk identifier of the hard drive containing 10.6.8. E.g. “disk2” or “disk0” or something similar. Make sure the 10.6.8 volume is unmounted!

  7. From a terminal, run:

    /Applications/VMware\ Fusion.app/Contents/Library/vmware-rawdiskCreator \
    create /dev/disk1 fullDevice ~/external-hdd ide

    You will need to change the disk1 part as needed. Basically, what this does is it creates a VMDK file (external-hdd.vmdk) that points to the specified block device; it doesn’t actually copy any data over. I have it creating the vmdk file in the current user’s home directory, but you can put it wherever.

  8. Locate the virtual machine file (in ~/Documents/Virtual Machines/ or wherever) that you created earlier in Fusion. Right click on it, do ‘Show Package Contents’, and move the external-hdd.vmdk file created with the last command into it.

  9. Using a text editor, modify the .vmx file, also inside the package for the virtual machine, and add the following two lines onto the end:

    ide1:0.present = "TRUE"
    ide1:0.fileName = "external-hdd.vmdk"

    Note that this differs from the “techrem” instructions linked above; its procedure specifies the drive ID as ide1:1 which is bus 1, slave. That caused an error when I tried it in Fusion; it wants the drive to be bus 1, master instead. YMMV.

    Also, if you changed the name of the vmdk file created using vmware-rawdiskCreator to something besides external-hdd, then you need to change the name of the vmdk appropriately.

  10. Now, you should be able to fire up Fusion and boot the VM. It will prompt you to authenticate as an administrator, saying that it needs privileges to access a Boot Camp volume (that is apparently what Fusion thinks the raw device vmdk is).

    The first time I booted, it took a long time to actually start up at a grey screen with the Apple logo, but then it did boot. Be patient. If you get a message saying that the “operating system is not supported and will now shut down”, or something to that effect, then it means the Unlocker modification didn’t take, and you need to retry that step (make sure the unlocker version you’re using supports the version of VMWare you’re trying to patch; they are pretty sensitive to particulars).

    As soon as you get booted up, you will probably want to change the virtual machine’s Machine Name (in System Settings, Sharing), and perhaps also the machine’s IP address if it’s statically configured. I set mine to Bridged networking and let my DHCP server sort it out.

At this point, you have a 10.9 machine, running 10.6.8 in a VM, giving you the ability to run PowerPC applications via Rosetta. It’ll be slower than molasses, but it does work, after a fashion. And because the VM references a physical drive that’s still installed in your computer, you also have the option of booting directly from that disk and running 10.6.8 on the bare metal.


0 Comments, 0 Trackbacks

[/technology] permalink

Mon, 29 Dec 2014

Although I’ve mostly switched over to Linux on the majority of my computers, I have one remaining Mac OS X machine for stuff like photo/video editing, running Quicken and TurboTax, interfacing with odd bits of hardware (label printers, film scanners, etc.) and other stuff that’s just obnoxiously fiddly on Linux.

The machine runs 10.9.5 and doesn’t typically cause me much trouble. However, in the last week or so I’ve noticed that it keeps waking up from sleep in the middle of the night every few minutes, sometimes for hours at a time, but then sometimes sleeping peacefully for long periods as well.

The culprit, according to the system logs, is apparently a Dymo LabelWriter printer connected via USB.

12/29/14 9:29:52.000 AM kernel[0]: The USB device HubDevice (Port 3 of Hub at 0xfd000000) 
 may have caused a wake by issuing a remote wakeup (2)
12/29/14 9:29:52.000 AM kernel[0]: The USB device HubDevice (Port 4 of Hub at 0xfd300000)
 may have caused a wake by issuing a remote wakeup (3)
12/29/14 9:29:52.000 AM kernel[0]: The USB device DYMO LabelWriter 330 (Port 4 of Hub at 0xfd340000)
 may have caused a wake by issuing a remote wakeup (3)
12/29/14 9:31:28.000 AM kernel[0]: The USB device HubDevice (Port 3 of Hub at 0xfd000000)
 may have caused a wake by issuing a remote wakeup (2)
12/29/14 9:31:28.000 AM kernel[0]: The USB device HubDevice (Port 4 of Hub at 0xfd300000)
 may have caused a wake by issuing a remote wakeup (3)
12/29/14 9:31:28.000 AM kernel[0]: The USB device DYMO LabelWriter 330 (Port 4 of Hub at 0xfd340000)
 may have caused a wake by issuing a remote wakeup (3)
[Repeat several hundred times]

Unfortunately, aside from just unplugging the offending device every night, there doesn’t seem to be a good solution to this problem. Apple’s tech support forums are filled with similar tales of woe, stemming from all sorts of USB devices. There’s no way—at least, not that it would seem—to control which devices are allowed to wake the system and which aren’t.

Even worse, there doesn’t even seem to be a way of disabling USB wake altogether, and just using the front-panel power button to wake the system, which would be a viable if drastic solution. Reaching down to hit the power button isn’t much of a hardship, and is analogous to the way I have most Linux-based laptops set up anyway (wake on power button, not on keyboard/mouse). But Apple thinks they know better and doesn’t allow it.
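
For what it’s worth, you can dump the power-management settings that are exposed and see that none of them are per-device or USB-specific; wake-on-LAN and similar knobs are in there, but nothing that would rein in a chatty USB hub:

$ pmset -g     # list the active power management settings
$ man pmset    # the documented settings don't include a USB wake option either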

This, to be honest, just sucks. Apple seems content to blame USB peripheral manufacturers for “not understanding Mac sleep”, as one forum poster put it, rather than just making their systems less oversensitive, or more configurable. Those obscure bits of hardware are the only reason I still have a Mac, so ditching them isn’t much of a solution.

I guess perhaps it’s what I deserve for buying a computer from a consumer-electronics company, but still, disappointing.

0 Comments, 0 Trackbacks

[/technology] permalink

Mon, 14 Jul 2014

An apparently common issue with Outlook for Mac 2011 is crazily high CPU usage, enough to spin up the fans on a desktop machine or drain the battery on a laptop, when Outlook really shouldn’t be doing anything.

If you do some Googling, you’ll find a lot of people complaining and almost as many recommended solutions. Updating to a version after 14.2 is a typical suggestion, as is deleting and rebuilding your mail accounts (ugh, no thanks).

Keeping Outlook up to date isn’t a bad idea, but the problem still persisted with the latest version as of today (14.4.3).

In my case, the high CPU usage had something to do with my Gmail IMAP account, which is accessed from Outlook alongside my Exchange mailbox. Disabling the Gmail account stopped the stupid CPU usage, but that’s not really a solution.

What did work was using the Progress window to see what Outlook was up to whenever the CPU pegged. As it turned out, there was a particular IMAP folder — the ‘Starred’ folder, used by both Gmail and Outlook for starred and flagged messages, respectively — which was being constantly refreshed by Outlook. It would upload all the messages in the folder to Gmail, then quiesce for a second, then do it over again. Over and over.

Outlook’s IMAP implementation is just generally bad, and this seems to happen occasionally without warning. But the Outlook engineers seem to have anticipated it, because if you right-click on an IMAP folder, there’s a helpful option called “Repair Folder”. If you use it on the offending folder, it will replace the contents of the local IMAP store with the server’s version, and break the infinite-refresh cycle.

So, long story short: if you have high-CPU issues with Outlook for Mac, try the following:

  1. Update Outlook using the built-in update functionality. See if that fixes the issue.
  2. Use the Progress window to see what Outlook is doing at times when the CPU usage is high. Is it refreshing an IMAP folder?
  3. If so, use the Repair Folder option on that IMAP folder, but be aware that any local changes you’ve made will be lost.

And, of course, lobby your friendly local IT department to use something that sucks less than Exchange.

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 08 Sep 2013

After reading through some — certainly not all, and admittedly not thoroughly — of the documents and analysis of the NSA “BULLRUN” crypto-subversion program, as well as various panicky speculation on the usual discussion sites, I can’t resist the temptation to make a few predictions/guesses. At some point in the future I’ll revisit them and we’ll all get to see whether things are actually better or worse than I suspect they are.

I’m not claiming any special knowledge or expertise here; I’m just a dog on the Internet.

Hypothesis 1: NSA hasn’t made any fundamental breakthroughs in cryptanalysis, such as a method of rapidly factoring large numbers, which render public-key cryptography suddenly useless.

None of the leaks seem to suggest any heretofore-unknown abilities that undermine the mathematics that lie at the heart of PK crypto (trapdoor functions). E.g. a giant quantum computer that can simply brute-force arbitrarily large keys in short amounts of time. In fact, the leaks suggest that this capability almost certainly doesn’t exist, or else all the other messy stuff, like compromising various implementations, wouldn’t be necessary.

Hypothesis 2: There are a variety of strategies used by NSA/GCHQ for getting access to encrypted communications, rather than a single technique.

This is a pretty trivial observation. There’s no single “BULLRUN” vulnerability; instead there was an entire program aimed at compromising various products to make them easier to break, and the way this was done varied from case to case. I point this out only because I suspect that it may get glossed over in public discussions of the issue in the future, particularly if there are especially egregious vulnerabilities that were inserted (as seems likely).

Hypothesis 3: Certificate authorities are probably compromised (duh)

This is conjecture on my part, and not drawn directly from any primary source material. But the widely-accepted certificate authorities that form the heart of SSL/TLS PKI are simply too big a target for anyone wanting to monitor communications to ignore. If you have root certs and access to backbone switches with suitably fast equipment, there’s no technical reason why you can’t MITM TLS connections all day long.

However, MITM attacks are still active rather than passive, and probably unfeasible even for the NSA or its contemporaries on a universal basis. Since they’re detectable by a careful-enough user (e.g. someone who actually verifies a certificate fingerprint over a side channel), it’s likely the sort of capability that you keep in reserve for when it counts.
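
(For the record, the fingerprint check I’m talking about is nothing exotic; something along these lines, with the result compared over a phone call or some other channel you trust, is the basic idea, assuming the stock openssl command-line tool:)

$ openssl s_client -connect example.com:443 < /dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint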

This really shouldn’t be surprising; if anyone seriously thought, pre-Snowden, that Verisign et al wouldn’t and hadn’t handed over the secret keys to their root certs to the NSA, I’d say they were pretty naive.

Hypothesis 4: Offline attacks are facilitated in large part by weak PRNGs

Some documents allude to a program of recording large amounts of encrypted Internet traffic for later decryption and analysis. This rules out conventional MITM attacks, and implies some other method of breaking commonly-used Internet cryptography.

At least one NSA-related weakness seems to have been the Dual_EC_DRBG pseudorandom number generator specified in NIST SP 800-90; it was a bit hamhanded as these things go because it was discovered, but it’s important because it shows an interest.

It is possible that certain “improvements” were made to hardware RNGs, such as those used in VPN hardware and also in many PCs, but the jury seems to be out right now. Compromising hardware does make somewhat more sense than software, though, since it’s much harder to audit and detect, and it’s also harder to update.

Engineered weaknesses inside [P]RNG hardware used in VPN appliances and other enterprise gear might be the core of NSA’s offline intercept capability, the crown jewel of the whole thing. However, it’s important to keep in mind Hypothesis 2, above.

Hypothesis 5: GCC and other compilers are probably not compromised

It’s possible, both in theory and to some degree in practice, to compromise software by building flaws into the compiler that’s used to create it. (The seminal paper on this topic is “Reflections on Trusting Trust” by Ken Thompson. It’s worth reading.)

Some only-slightly-paranoids have suggested that the NSA and its sister organizations may have attempted to subvert commonly-used compilers in order to weaken all cryptographic software produced with them. I think this is pretty unlikely to have actually been carried out; it just seems like the risk of discovery would be too high. Despite the complexity of something like GCC, there are lots of people looking at it from a variety of organizations, and it would be difficult to subvert all of them, and harder still to insert an exploit that would go completely undetected. In comparison, it would be relatively easy to convince a single company producing ASICs to modify a proprietary design. Just based on bang-for-buck, I think that’s where the effort is likely to have been.

Hypothesis 6: The situation is probably not hopeless, from a security perspective.

There is a refrain in some circles that the situation is now hopeless, and that PK-cryptography-based information security is irretrievably broken and can never be trusted ever again. I do not think that this is the case.

My guess — and this is really a guess; it’s the thing that I’m hoping will be true — is that there’s nothing fundamentally wrong with public key crypto, or even in many carefully-built implementations. It’s when you start optimizing for cost or speed that you open the door.

So: if you are very, very careful, you will still be able to build up a reasonably-secure infrastructure using currently available hardware and software. (‘Reasonably secure’ meaning resistant to untargeted mass surveillance, not necessarily to a targeted attack that might include physical bugging: that’s a much higher bar.) However, some code may need to be changed in order to eliminate any reliance on possibly-compromised components, such as hardware RNGs / accelerators that by their nature are difficult to audit.

Large companies that have significant investments in VPN or TLS-acceleration hardware are probably screwed. Even if the gear is demonstrably flawed, look for most companies to downplay the risk in order to avoid having to suddenly replace it.

Time will tell exactly what techniques are still safe and which aren’t, but my WAG (just for the record, so that there’s something to give a thumbs-up / thumbs-down on later) is that TLS in FIPS-compliance mode, on commodity PC hardware but with hardware RNGs disabled or not present at both ends of the connection, using throwaway certificates (e.g. no use of conventional PKI like certificate authorities) validated via a side-channel, will turn out to be fairly good. But a lot of work will have to be invested in validating everything to be sure.

Also, my overall guess is that neither the open-source world nor the commercial, closed-source world will come out entirely unscathed, in terms of reputation for quality. However, the worst vulnerabilities are likely to have been inserted where there were the fewest eyes looking for them, which will probably be in hardware or tightly integrated firmware/software developed by single companies and distributed in “compiled” (literally compiled or in the form of an ASIC) form only.

As usual, simpler will turn out to be better, and generic hardware running widely-available software will be better than dedicated boxes filled with a lot of proprietary silicon or code.

So we’ll see how close I am to the mark. Interesting times.

0 Comments, 0 Trackbacks

[/technology] permalink

Fri, 22 Feb 2013

I’ve recently (re)taken up cycling in a fairly major way, and have been surprised by how much I’ve enjoyed it. One of the things that’s making it more fun this time around, as compared to previous dabblings in years past, is the various ways that you can measure and quantify your progress — not to mention your suffering — and compare it with others, etc.

For example, a recent ride taken with a few friends:

Time: 01:54:50
Avg Speed: 13.5 mi/h
Distance: 25.8 mi
Energy Output: 826 kJ
Average Power: 120 W

Now, 120 W is really not especially great from a competitive cycling perspective; better riders routinely output 500-ish watts. But it struck me as being pretty efficient: for all my effort, the ride actually only required the same amount of power to propel me on my way as would have been required by two household light bulbs.

So that got me thinking: just how efficient is cycling?

My 25.8 mi / 41.5 km roundtrip ride required 826 kJ, if we believe Strava; that’s mechanical energy at the pedals. (I unfortunately don’t have a power meter on my bike, so this is a bit of an estimate on Strava’s part, taking into account my weight, my bike’s weight, my speed, the elevation changes on the route, etc.)

That’s about the same as the energy released by burning roughly 18 grams of gasoline (taking gasoline at about 46 kJ per gram). If I ran directly on gasoline, my 24 fl oz water bottle (a bit over 500 grams of fuel) would hold enough for something like 750 miles of riding.

Of course, cars aren’t perfectly efficient in their use of gasoline, and I’m not a perfectly efficient user of food calories. Strava helpfully estimates the food-calorie expenditure of my ride at 921 Calories, which is 3.85 MJ, leading to a somewhat disappointing figure of only 21.4% overall efficiency. (Disappointing only in the engineering sense; from an exercise perspective I’d really rather it be low.)
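
Spelled out, the arithmetic behind the power and efficiency figures is just:

Average power:  826 kJ / 6,890 s (that's 1:54:50)    ≈ 120 W
Food energy:    921 Cal × 4.184 kJ/Cal               ≈ 3,850 kJ (3.85 MJ)
Efficiency:     826 kJ / 3,850 kJ                    ≈ 21.4%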

Though it’s about on par with a car, interestingly enough. The Feds give anywhere between 14-26% as a typical ‘tank-to-tread’ efficiency figure for a passenger car, with most losses in the engine itself.

So if I were able to drink gasoline and use it only as efficiently as a car does, my water bottle would still get me on the order of a hundred miles (give or take, using the low-end 14% efficiency figure for a car). Still pretty good, considering that my own car would only get about 5 miles on the same amount of fuel (24 fl oz at 25 MPG).

Of course, a car isn’t an especially fair comparison — it has a lot of overhead in terms of mass, rolling resistance (more, lower-pressure tires), and air resistance (higher cross-sectional area). Some sort of small motorbike would be a better comparison, and there I suspect you’d start to see an even playing field.

Maybe that’s my argument for getting a motorcycle…

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 26 Sep 2012

I recently had a hardware failure, and decided to take the opportunity to upgrade my aging home server from Ubuntu ‘Dapper Drake’ to Scientific Linux. The reasons for my move away from Ubuntu are an article unto themselves, but it boils down to what I see as an increasing contempt for existing users (and pointless pursuit of hypothetical tablet users — everybody wants to try their hand at being Apple these days, apparently unaware that the role has been filled), combined with — and this is significantly more important — the fact that I have been using RPM-based distros far more often at work than Debian/APT-based ones, despite the many advantages of the latter. Anyway, so I decided to switch the server to SL.

The actual migration process wasn’t pretty and involved a close call with a failing hard drive which I won’t bore you with. The basic process was to preserve the /home partition while tossing everything else. This wasn’t too hard, since SL uses the same Anaconda installer as Fedora and many other distros. I just told it to use my root partition as /, my home partition as /home, etc.

And then I rebooted into my new machine. And seemingly everything broke.

The first hint was on login: I got a helpful message informing me that my home directory (e.g. /home/myusername) didn’t exist. Which was interesting, because once logged in I could easily cd to that directory, which plainly did exist on the filesystem.

The next issue was with ssh: although I could connect via ssh as my normal user, it wasn’t possible to use public key auth, based on the authorized_keys file in my home directory. It was as though the ssh process wasn’t able to access my home directory…

As it turned out, the culprit was SELinux. Because the “source” operating system that I was migrating from didn’t have SELinux enabled, and the “destination” one did, there weren’t proper ‘security contexts’ (extended attributes) on the files stored on /home.

The solution was pretty trivial: I had to run # restorecon -R -v /home (note as root!), which took a few minutes, and then everything worked as expected. This was something I only discovered after much searching, on this forum posting regarding a Fedora 12 install. I’m noting it here in part so that perhaps other people in the future can find it more easily. And because, unfortunately, there are forums filled with people experiencing the same problem and receiving terrible advice that they need to reformat /home (in effect, throw away all their data) in order to upgrade or change distros.
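
If you want to see the before-and-after for yourself, ls has a -Z flag that prints the SELinux context on each file. The exact labels will vary by policy, but after restorecon the home directories should carry proper user-home contexts instead of nothing or a generic default:

$ ls -Zd /home/myusername       # show the security context on the home directory itself
$ sudo restorecon -R -v /home   # relabel everything under /home, listing each change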

Bottom line: if you are running into weird issues on login (console or SSH) after an upgrade from a non-SELinux distro to a SELinux-enabled one, try rebuilding the security context before taking any drastic steps.

0 Comments, 0 Trackbacks

[/technology] permalink