Kadin2048's Weblog


Fri, 26 Aug 2016

The other day I discovered an interesting Python behavior that I somehow had managed not to hit before — in fairness, I use Python mostly for scripting and automation, not ‘real’ software development, but I still thought I understood the basics reasonably well.

Can you spot the problem? The following is designed to remove words from a list if they are below a certain number of characters, specified by args.minlength:

for w in words:
    if len(w) < int(args.minlength):
        words.remove(w)

The impending misbehavior, if you didn’t catch it by this point, is not necessarily obvious. It won’t barf an error at you, and you can actually get it to pass a trivial test, depending on how the test data is configured. But on a real dataset, you’ll end up with lots of words shorter than args.minlength left in words after you (thought you) iterated through and cleaned them!

(If you want to play with this on your own, try running the above loop against the contents of your personal iSpell dictionary — typically ~/.ispell_english on Unix/Linux — or some other word list. The defect will quickly become apparent.)

A good description of the problem, along with several solutions, is of course found on Stack Overflow. But to save you the click: the problem is iterating over a mutable object, such as a list, while modifying it (e.g. by removing items) inside the loop. Per the Python docs, you shouldn’t do that:

If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy.

The solution is easy:

for w in words[:]:
    if len(w) < int(args.minlength):
        words.remove(w)

Adding the slice notation causes Python to iterate over a copy of the list (pre-modification), which is what you actually want most of the time, and then you’re free to modify the actual list all you want from inside the loop. There are lots of other possible solutions if you don’t like the slice notation, but that one seems pretty elegant (and it’s what’s recommended in the Python docs so it’s presumably what someone else reading your code ought to expect).
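To make the comparison concrete, here’s a small self-contained sketch (the word list and min_length value are made up for illustration) showing the slice-copy approach next to two common alternatives:

```python
# Three safe ways to drop short words; `min_length` stands in for
# int(args.minlength) from the original snippet.
words = ["a", "an", "the", "elephant", "cat", "aardvark"]
min_length = 4

# 1. Iterate over a slice-copy, mutate the original (the docs' recommendation):
kept = list(words)
for w in kept[:]:
    if len(w) < min_length:
        kept.remove(w)

# 2. Build a new list with a comprehension (usually the most idiomatic):
kept2 = [w for w in words if len(w) >= min_length]

# 3. Filter via slice assignment, keeping the same list object (useful when
#    other code holds a reference to the original list):
kept3 = list(words)
kept3[:] = [w for w in kept3 if len(w) >= min_length]
```

All three leave only the words of four or more characters; which one to use is mostly a matter of taste and of whether other references to the list must see the change.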

I’d seen the for item in list[:]: construct in sample code before, but the exact nature of the bugs it prevents hadn’t been clear to me. Perhaps this will be enlightening to someone else as well.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Mon, 15 Aug 2016

Very cool open-source project VeraCrypt is all over the news this week, it seems. First when they announced that they were going to perform a formal third-party code audit, and had come up with the funds to pay for it; and then today when they claimed their emails were being intercepted by a “nation-state” level actor.

The audit is great news, and once it’s complete I think we’ll have even more confidence in VeraCrypt as a successor to TrueCrypt (which suffered from a bizarre developer meltdown1 back in 2014).

The case of the missing messages

However, I’m a bit skeptical about the email-interception claim, at least based on the evidence put forward so far. It may be the case — and, let’s face it, should be assumed — that their email really is being intercepted by someone, probably multiple someones. Frankly, if you’re doing security research on a “dual use” tool2 like TrueCrypt and don’t think that your email is being intercepted and analyzed, you’re not participating in the same consensus reality as the rest of us. So, not totally surprising on the whole. Entirely believable.

What is weird, though, is that the evidence for the interception is that some messages have mysteriously disappeared in transit.

That doesn’t really make sense. It doesn’t really make sense from the standpoint of the mysterious nation-state-level interceptor, because making the messages disappear tips your hand, and it also isn’t really consistent with how most modern man-in-the-middle style attacks work. Most MITM attacks require that the attacker be in the middle, that is, talking to both ends of the connection and passing information. You can’t successfully do most TLS-based attacks otherwise. If you’re sophisticated enough to do most of those attacks, you’re already in a position to pass the message through, so why not do it?

There’s no reason not to just pass the message along, and that plus Occam’s Razor is why I think the mysteriously disappearing messages aren’t a symptom of spying at all. I think there’s a much more prosaic explanation. Which is not to say that their email isn’t being intercepted. It probably is. But I don’t think the missing messages are necessarily a smoking gun displaying a nation-state’s interest.

Another explanation

An alternative, if more boring, explanation to why some messages aren’t going through has to do with how Gmail handles outgoing email. Most non-Gmail mailhosts have entirely separate servers for incoming and outgoing mail. Outgoing mail goes through SMTP servers, while incoming mail is routed to IMAP (or sometimes POP) servers. The messages users see when looking at their mail client (MUA) are all stored on the incoming server. This includes, most critically, the content of the “Sent” folder.

In order to show you messages that you’ve sent, the default configuration of many MUAs, including Mutt and older versions of Apple Mail and Microsoft Outlook, is to save a copy of the outgoing message in the IMAP server’s “Sent” folder at the same time that it’s sent to the SMTP server for transmission to the recipient.
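As a sketch of what that save-to-Sent step looks like in code — the addresses and folder name below are placeholders, and this is illustrative of the general IMAP APPEND mechanism rather than any particular MUA’s implementation:

```python
# An MUA-style "save to Sent": after handing the message to SMTP, append a
# copy to the IMAP "Sent" folder. Addresses and folder name are made up.
import imaplib
import time
from email.message import EmailMessage

def save_to_sent(imap_conn, msg, folder="Sent"):
    """Append an outgoing message to the Sent folder over IMAP."""
    return imap_conn.append(
        folder,
        "\\Seen",                                # flag it as already read
        imaplib.Time2Internaldate(time.time()),  # internal date = now
        msg.as_bytes(),
    )

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"
msg["Subject"] = "hello"
msg.set_content("test body")

# Against a real server this would be something like:
#   conn = imaplib.IMAP4_SSL("imap.example.com")
#   conn.login("me@example.com", "password")
#   save_to_sent(conn, msg)
```

Gmail’s trick is that its SMTP server performs this append for you, which is exactly why MUAs pointed at Gmail leave this step turned off.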

This is a reasonable default for most ISPs, but not for Gmail. Google handles outgoing messages a bit differently, and their SMTP servers have more-than-average intelligence for an outgoing mail server. If you’re a Gmail user and you send your outgoing mail using a Gmail SMTP server, the SMTP server will automatically communicate with the IMAP server and put a copy of the outgoing message into your “Sent” folder. Pretty neat, actually. (A nice effect of this is that you get a lot more headers on your sent messages than you’d get by doing the save-to-IMAP route.)

So as a result of Gmail’s behavior, virtually all Gmail users have their MUAs configured not to save copies of outgoing messages via IMAP, and depend on the SMTP server to do it instead. This avoids duplicate messages ending up in the “Sent” folder, a common problem with older MUAs.

This is all fine, but it does have one odd effect: if your MUA is configured to use Gmail’s SMTP servers and then you suddenly use a different, non-Google SMTP server for some reason, you won’t get the sent messages in your “Sent” box anymore. All it takes is an intermittent connectivity problem to Google’s servers, causing the MUA to fail over to a different SMTP server (maybe an old ISP SMTP or some other configuration), and messages won’t show up anymore. And if the SMTP server it rolls over to isn’t correctly configured, messages might just get silently dropped.

I know this, because it’s happened to me: I have Gmail’s SMTP servers configured as primary, but also have my ISP’s SMTP servers set up in my MUA, because I have to use them for some other email accounts that don’t come with a non-port-25 SMTP server (and my ISP helpfully blocks outgoing connections on port 25). It’s probably not an uncommon configuration at all.

Absent some other evidence that the missing messages are being caused by a particular attack (and it’d have to be a fairly blunt one, which makes me think someone less competent than nation-state actors), I think it’s easier to chalk the behavior up to misconfiguration than to enemy action.

Ultimately though, it doesn’t really matter, because everyone ought to be acting as though their messages are going to be intercepted as they go over the wire anyway. The Internet is a public network: by definition, there’s no security guarantees in transit. If you want to prevent snooping, the only solution is end-to-end crypto combined with good endpoint hygiene.

Here’s wishing all the best to the VeraCrypt team as they work towards the code audit.

1: Those looking for more information on the TrueCrypt debacle can refer to this Register article or this MetaFilter discussion, both from mid-2014. This 2015 report may also be of interest. But as far as I know, the details of what happened to the developers to prompt the project’s digital self-immolation are still unknown and speculation abounds about the security of the original TrueCrypt.

2: “Dual use” in the sense that it is made available for use by anyone, and can be therefore used for both legitimate/legal and illegitimate/illegal purposes. I think it goes almost without saying that most people in the open-source development community accept the use of their software by bad actors as simply a cost of doing business and a reasonable trade-off for freedom, but this is clearly not an attitude that is universally shared by governments.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Fri, 12 Aug 2016

The work I’ve been doing with Tvheadend to record and time-shift ATSC broadcast television got me thinking about my pile of old NTSC tuner cards, leftover from my MythTV system designed for recording analog cable TV. These NTSC cards aren’t worth much, now that both OTA broadcast and most cable systems have shifted completely over to ATSC and QAM digital modulation schemes, except in one regard: they ought to be able to still receive FM broadcasts.

Since the audio component of NTSC TV transmissions is basically just FM, and the NTSC TV bands completely surround the FM broadcast band on both sides, any analog TV receiver should have the ability to receive FM audio as well — at least in mono (FM stereo and NTSC stereo were implemented differently, the latter with a system called MTS). But of course whether this is actually possible depends on the tuner card’s implementation.

I haven’t plugged in one of my old Hauppauge PCI tuner cards yet, although they may not work because they contain an onboard MPEG-2 hardware encoder — a feature I paid dearly for, a decade ago, because it reduces the demand on the host system’s processor for video encoding significantly — and it wouldn’t surprise me if the encoder failed to work on an audio-only signal. My guess is that the newer cards which basically just grab a chunk of spectrum and digitize it, leaving all (or most) of the demodulation to the host computer, will be a lot more useful.

I’m not the first person to think that having a ‘TiVo for radio’ would be a neat idea, although Googling for anything in that vein gets you a lot of resources devoted to recording Internet “radio” streams (which I hate referring to as “radio” at all). There have even been dedicated hardware gadgets sold from time to time, designed to allow FM radio timeshifting and archiving.

  • Linux based Radio Timeshifting is a very nice article, written back in 2003, by Yan-Fa Li. Some of the information in it is dated now, and of course modern hardware doesn’t even break a sweat doing MP3 encoding in real time. But it’s still a decent overview of the problem.
  • This Slashdot article on radio timeshifting, also from 2003 (why was 2003 such a high-water-mark for interest in radio recording?), still has some useful information in it as well.
  • The /drivers/media/radio tree in the Linux kernel contains drivers for various varieties of FM tuners. Some of the supported devices are quite old (hello, ISA bus!) while some of them are reasonably new and not hard to find on eBay.

Since I have both a bunch of old WinTV PCI cards and a newer RTL2832U SDR dongle, I’m going to try to investigate both approaches: seeing if I can use the NTSC tuner as an over-engineered FM receiver, and if that fails maybe I’ll play around with RTL-SDR and see if I can get that to receive FM broadcast at reasonable quality.
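As a taste of the RTL-SDR route: the heart of software FM demodulation is just a phase-difference discriminator applied to complex baseband samples. Here’s a toy, pure-Python sketch on a synthetic signal (the sample rate and frequency offset are made-up illustrative values, not anything RTL-SDR-specific):

```python
# Toy phase-difference FM discriminator on synthetic complex baseband samples.
import cmath
import math

fs = 240_000       # sample rate in Hz (made-up value)
f_offset = 12_000  # simulated instantaneous frequency deviation in Hz

# Complex baseband samples of a constant-frequency tone.
samples = [cmath.exp(2j * math.pi * f_offset * n / fs) for n in range(1000)]

# Discriminator: the phase step between successive samples is proportional to
# instantaneous frequency: freq = fs * angle(s[n] * conj(s[n-1])) / (2*pi).
demod = [
    fs * cmath.phase(samples[n] * samples[n - 1].conjugate()) / (2 * math.pi)
    for n in range(1, len(samples))
]

avg = sum(demod) / len(demod)  # should recover f_offset
```

A real receiver adds filtering, decimation, and de-emphasis around this core, but the discriminator is the same idea.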

0 Comments, 0 Trackbacks

[/technology/software] permalink

Wed, 27 Jul 2016

I noticed on several Dell laptops that I’ve upgraded to Debian 8 ‘Jessie’ with the XFCE desktop environment that the keyboard mute button had stopped working. Or rather, the button mutes the audio just fine, but pressing it again doesn’t actually unmute again. To get audio back, you have to manually invoke alsamixer and unmute from there.

After more searching than it seemed this problem ought to require, I found this StackExchange answer, which references a post on Rony Lutsky’s blog which gave me the solution.

It turns out that the fix is remarkably simple:

sudo apt-get install gstreamer0.10-pulseaudio

You can then, if you want, verify that it worked by running xfconf-query -lc xfce4-mixer before and after installing the package, but this isn’t a key part of the process.

From what I can tell, the issue is a missing dependency in one of the XFCE audio packages, but I’m damned if I know which one exactly.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Fri, 24 Jun 2016

Just a quick tip, because I found this information absurdly hard to find online using the search terms I was using. If anyone else out there has a Dell Latitude E6410 laptop, and wants to use it under Linux and achieve the same scrolling behavior as under Windows, using the big center button under the ‘DualPoint Stick’ (Dell’s term for the Touchpoint-ish control in the middle of the keyboard) to scroll, here’s what you need to do:

Create a new file in /usr/share/X11/xorg.conf.d/; I called it 60-wheel-emulation.conf, although the filename isn’t especially important as long as it doesn’t start with a number lower than the other files in the directory.

E.g. you can just do:

$ sudo emacs /usr/share/X11/xorg.conf.d/60-wheel-emulation.conf

In the file, add the following:

Section "InputClass"
   Identifier "Wheel Emulation"
   MatchProduct "DualPoint Stick"
   Option "EmulateWheel" "on"
   Option "EmulateWheelButton" "2"
   Option "XAxisMapping" "6 7"
   Option "YAxisMapping" "4 5"
EndSection
This activates a feature called (as you may have figured out) Wheel Emulation, which simulates scroll wheel behavior when a button is pressed and the mouse — or in this case, the pointing stick — is moved. In Windows, this is the default behavior for the Dell DualPoint, but in Linux, the default behavior is for that button to behave as an (absurdly large) traditional middle-click mouse button, which pastes the clipboard.

On a regular mouse, the Linux behavior (paste) is arguably a lot more useful, particularly if you also have an actual scrollwheel. But on the E6410, with the pointing stick, scrolling is a much more common interaction than pasting, and I found that I really missed it.

This restores the functionality to what you may be used to.

Further information can be found at this Unix Stackexchange question which is where I got the original tip. Note that you can’t just copy and paste from that page and have it work on a Dell; the product name is wrong. You can determine the product name as described there, using the xinput --list command, however, if you have another model or brand of laptop.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Sun, 08 Apr 2012

For no particularly good reason, I decided I wanted to play around with IBM VM earlier this weekend. Although this would seem on its face to be fairly difficult — VM/370 is a mainframe OS, after all — thanks to the Hercules emulator, you can get it running on either Windows or Linux fairly easily.

Unfortunately, many of the instructions I found online were either geared towards people having trouble compiling Hercules from source (which I avoided thanks to Ubuntu’s binaries), or assume a lot of pre-existing VM/370 knowledge, or are woefully out of date. So here are just a couple of notes should anyone else be interested in playing around with a fake mainframe…

Some notes about my environment:

  • Dual-core AMD Athlon64 2GHz
  • 1 GB RAM (yes, I know, it needs more memory)
  • Ubuntu 10.04 LTS, aka Lucid

Ubuntu Lucid has a good binary version of Hercules in the repositories. So no compilation is required, at least not for any of the basic features that I was initially interested in. A quick apt-get install hercules and apt-get install x3270 were the only necessities.

In general, I followed the instructions at gunkies.org: Installing VM/370 on Hercules. However, there were a few differences. The guide is geared towards someone running Hercules on Windows, not Linux.

  • You do not need to set everything up in the same location as the Hercules binaries, as the guide seems to indicate. I created a vm370 directory in my user home, and it worked fine as a place to set up the various archives and DASD files (virtual machine drives).

  • The guide takes you through sequences where you boot the emulated machine, load a ‘tape’, reboot, then load the other ‘tape’. When I did this, the second load didn’t work (indefinite hang until I quit the virtual system from the Hercules console). But after examining the DASD files, it seemed like the second tape had loaded anyway, but the timestamp indicated that it had loaded at the same time as the first tape. I think that they both loaded one after the other in the first boot cycle — hard to tell for sure at this point, but don’t be too concerned if things don’t seem to work as described; I got a working system anyway. Update: The instructions work as described; I had a badly set-up DASD file that was causing an error, which did not show itself until later when I logged in and tried to start CMS.

  • To get a 3270 connection, I had to connect to the machine’s IP address on port 3270; trying to connect to “localhost” didn’t work. I assume this is just a result of how Hercules is set up to listen, but it caused me to waste some time.

  • The tutorial tells you to start Hercules, then connect your 3270 emulator to the virtual system, then run the ipl command; the expected result is to see the loader on the 3270. For me, this didn’t work… the 3270 display just hung at the Hercules splash screen. To interact with the loader, I had to disconnect and reconnect the 3270 emulator. So, rather than starting Hercules, connecting the 3270, then ipl-ing, it seems easier to start Hercules, ipl, then connect and operate the loader.
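For reference, the settings involved in the 3270-connection step live in the Hercules configuration file (hercules.cnf). The fragment below is purely illustrative — the values are placeholders and this is nowhere near a complete working config, so treat it as a pointer to the relevant statements rather than something to copy:

```
# hercules.cnf fragment -- illustrative values only, not a complete config
MAINSIZE  16         # emulated main storage, in megabytes
CNSLPORT  3270       # TCP port Hercules listens on for TN3270 clients

# Device statements: a 3270-type terminal at device address 001F
001F 3270
```

The CNSLPORT line is what determines where your x3270 emulator should connect.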

Of course, when you get through the whole procedure, what you’ll have is a bare installation of VM/370… without documentation (or extensive previous experience), you can’t do a whole lot. That’s what I’m figuring out now. Perhaps it’ll be the subject of a future entry.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Wed, 13 Oct 2010

This is just a quick note, mostly for my own reference, of a few ways to easily delete the dot-underscore (._foo, ._bar, etc.) files created by (badly-behaved) Mac OS X systems on non-AFP server volumes.

First of all, if you’re in a mixed-platform environment, you probably want to run this command on your Mac:

defaults write com.apple.desktopservices DSDontWriteNetworkStores true

This doesn’t stop the creation of dot-underscore (resource fork) files, but it does at least cut down on the creation of their equally-obnoxious cousin, the “.DS_Store” file. I’m not aware of a way to automatically and persistently suppress the creation of resource fork files on platforms that don’t deal with resource forks, though.

For the record, it’s not that I’m against the idea of resource forks or filesystem metadata … I think metadata is great and I wish filesystems supported more of it! But hacky solutions like .DS_Store and dot-underscore resource forks are not going to convince anyone who’s on the fence, and give the Mac OS a reputation for crapping all over shared network resources.

To get rid of the dot-underscore files, the most efficient way is using find from the Unix side of things:

find . -name '._*' -exec ls {} \;

Once you’ve verified that you’re only looking at files you want to delete, kill them with:

find . -name '._*' -exec rm -v {} \;

And if you have .DS_Store files around that you need to zap as well, then you’d just do:

find . -name '.DS_Store' -exec rm -v {} \;

The -v switch on rm isn’t strictly necessary, of course, but I like it just so I can see what’s going on. If you’re hardcore, you can omit it. Note that the quotes around the search string being passed to find are crucial; if you leave the pattern unquoted, your shell will (more than likely, depending on the shell and the contents of the current directory) expand the glob itself before find ever sees it. Not good.

A certain amount of caution is advised when running this — although the files are basically useless on any non-Mac platform, they do contain Finder comments and HFS+ EAs, which are significant on OS X and could be important to some users. This is not something you’d want to run globally on a shared system, for instance, unless it was as part of a script that checked to see whether the dot-underscore file was an orphan, or something with similar safeguards.
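A minimal sketch of that orphan-check safeguard might look like the following (the function name is mine, and you’d want to test it carefully on scratch data before pointing it at a real share):

```python
# Delete a dot-underscore file only if the file it shadows no longer exists.
import os
import tempfile

def remove_orphan_appledoubles(root, dry_run=True):
    """Remove ._* files whose companion file is gone; return paths flagged."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.startswith("._"):
                continue
            companion = os.path.join(dirpath, name[2:])
            if not os.path.exists(companion):  # no matching real file: orphan
                target = os.path.join(dirpath, name)
                flagged.append(target)
                if not dry_run:
                    os.remove(target)
    return flagged

# Tiny demonstration in a scratch directory: "foo" still exists, so "._foo"
# is kept; "._bar" has no companion, so it is flagged and then removed.
scratch = tempfile.mkdtemp()
for name in ("foo", "._foo", "._bar"):
    open(os.path.join(scratch, name), "w").close()

orphans = remove_orphan_appledoubles(scratch)       # dry run: report only
remove_orphan_appledoubles(scratch, dry_run=False)  # actually delete
```

Running with dry_run=True first is the script analogue of the find/ls pass before the find/rm pass above.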

Unfortunately I don’t see the need for this going away, unless Apple finds some more elegant solution for dealing with Mac-specific metadata in mixed environments. It would be great if copying files from a Mac to a Linux-backed SMB share automatically preserved all the HFS+ metadata and turned it into ext4 extended attributes, obviating the need for the dot-underscore files… I am not going to hold my breath for that, though.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Fri, 11 Sep 2009

Earlier today my copy of Quicken 2006 for Mac began refusing to download transaction activity from any of my bank or credit card accounts, complaining about an “OL-249” error. It took a bit of Googling to figure out what was going on, so I thought I’d post the solution here.

Short version: you need to download this fairly obscure patch from the Quicken website and install it. You should do this after updating via the regular File/Check for Updates option, and it is in addition to the updates provided via that route.

Longer explanation: from what I can tell, the certificates included with Quicken 2005, 2006 (which I use), and 2007 had relatively short expiration dates. They expired, and for some reason either weren’t or couldn’t be updated via the built-in update mechanism. Hence the additional patch. Why they couldn’t do this via a regular update push I’m not sure, but at least they made them available somehow — I would have half expected them to just tell everyone to upgrade.

Once I ran the installer against Quicken.app, online transaction downloading worked fine once again.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Tue, 02 Dec 2008

Yesterday, I finally got around to upgrading my home server from Ubuntu 6.06 LTS (aka ‘Dapper Drake’) to the latest “long-term support” release, 8.04 LTS (‘Hardy Heron’).

Pretty much everything went according to plan. Since my server is headless, I was a bit nervous about the whole thing — having to attach a monitor and keyboard to it would have been a major problem. But this turned out to be unwarranted; the whole procedure was quite smooth.

The only issue I did run into was the dreaded “can’t set the locale; make sure $LC_* and $LANG are correct” problem, after I rebooted. This is a very common issue, and if you’re a Linux or BSD user and you haven’t run across it yet, chances are at some point you will. A quick search using Google will turn up hundreds of people looking for solutions.

Unfortunately it’s a nasty issue because there are many reasons why it can happen. In my case, none of the solutions suggested in most forum posts (run dpkg-reconfigure locale, check locale -a, etc.) worked. However, I did notice that when I looked at the current values of $LANG and $LC_ALL, they were incorrect.

In particular:

$ echo $LANG
en_US

This is wrong. The correct locale specifies a text encoding, so a proper value is en_US.UTF-8, not just en_US.

Unfortunately, it took me a long time to figure out where to set this value. Throwing it into my .bashrc would have solved the problem when I was logged in and running things as my user, but it wouldn’t have prevented it from cropping up when the root user’s cron tasks ran automatically (which results in me getting sent error emails every few minutes; pretty annoying).

What I wanted was to set LANG=en_US.UTF-8 as a global variable for the entire system, for all users, all the time, whether running interactively or not. In order to do this, the file /etc/environment must be edited. This file holds global variables that apply to the entire system: typically just the locale and a bare-minimum PATH.

To /etc/environment I added (the first line was present but specified “en_US”):

LANG="en_US.UTF-8"
LC_ALL="en_US.UTF-8"
In order to get this to take effect, I had to restart all my open shells, including a few instances of screen. However, it made the problems go away.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Sun, 31 Aug 2008

I decided to do a little playing around earlier this weekend with Python and CGI scripts. Just for something to do, I kludged together a little comment form for this site. It’s not yet operational — I still haven’t figured out how to get ReCaptcha working via a CGI here on the SDF — but it hopefully will show up some day.

Anyway, I ran into a weird issue when trying to write to an “mbox”-format mail spool file using Python. Basically, rather than actually sending email from within my CGI script, I instead just wanted to take the user’s form input and write it to an mbox-style spool file somewhere on the filesystem, for later perusal using an MUA.

In theory, this should be fairly simple. Python comes with a standard library called “mailbox” that’s purpose-built for working with a variety of spool/mailbox file types, and can add messages to them with ease. Unfortunately, I can’t seem to get it to work right; specifically, the message envelope delimiters don’t seem to be getting written correctly.

In an mbox-format spool file, each message is delimited by a string consisting of a newline, the word “From”, and a space. What comes after the word “From” isn’t really that important, but typically it’s the actual ‘From’ address followed by a timestamp. The crucial part in all this is that, with the exception of the very first message in an mbox file, the delimiter line that begins each message must be preceded by a blank line.

In other words, when writing new messages to an mbox file, you need to always start by writing a newline, or else you need to be religious about ensuring (and checking) that the text of each message ends with no less than two newline characters, in order to guarantee a blank line at the end. (According to the Qmail docs, the blank line is considered part of the end of the preceding message, rather than part of the ‘From_’ delimiter.)

Supposedly, when you use Python’s mailbox.mboxMessage class in conjunction with mailbox.mbox to create message objects and write them to a file, this should all be handled. However, it doesn’t seem to be working for me.

The code looks something like this (similar lines removed for clarity):

import datetime
import mailbox

# formdata is a cgi.FieldStorage() populated from the submitted form
mailmsg = mailbox.mboxMessage()
mailmsg['To'] = 'Kadin'
mailmsg['From'] = formdata['from'].value
# Other headers removed...
mailmsg.set_payload( formdata['message'].value )

mboxfile = mailbox.mbox('/tmp/'+str( datetime.date.today() )+'.mbox', factory=None, create=True)
mboxfile.add(mailmsg)
mboxfile.close()

From my reading of the documentation and some similar code samples, this should produce a correctly-formatted mbox file — but it doesn’t. Instead, it produces this:

From MAILER-DAEMON Sun Aug 31 06:48:30 2008
To: Kadin
From: Testuser
Subject: FORMMAIL:Test Subject
Date: Sun Aug 31 02:48:30 2008
Reply-To: test@test.example

Test message would go here.
From MAILER-DAEMON Sun Aug 31 06:48:46 2008
To: Kadin
From: Testuser2
Subject: FORMMAIL:Test Subject 2
Date: Sun Aug 31 02:48:46 2008
Reply-To: test2@test.example

Another message would go here.

Notice that there’s no empty line between the two messages? That means that when the mbox file is parsed by most applications, they don’t see all the messages in the box. Instead, they simply assume that (since there are no valid delimiters) it’s all one really long message, and display it as such.

While I think I might be able to fix this by just adding a couple of newlines onto the entered text before it gets incorporated into the message object’s payload, that doesn’t seem like how things should have to work. Unless I’m just misunderstanding the mbox format (there are enough varieties of it, so it’s possible), it doesn’t seem like that ought to be required.
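For what it’s worth, here is a sketch of the same flow with that defensive newline added to the payload; under a current Python 3 mailbox module this round-trips correctly (the field values are made up, standing in for the CGI form data):

```python
# Write two messages to an mbox file, then parse it back to count them.
import mailbox
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.mbox")
box = mailbox.mbox(path, create=True)
box.lock()
try:
    for sender, body in [("Testuser", "Test message would go here."),
                         ("Testuser2", "Another message would go here.")]:
        msg = mailbox.mboxMessage()
        msg["To"] = "Kadin"
        msg["From"] = sender
        msg["Subject"] = "FORMMAIL:Test Subject"
        # Belt-and-braces: make sure the payload ends in a newline, so the
        # next From_ delimiter lands after a blank line.
        msg.set_payload(body.rstrip("\n") + "\n")
        box.add(msg)
    box.flush()
finally:
    box.unlock()

count = len(mailbox.mbox(path))  # both messages should be seen by the parser
```

If the older library version really is mishandling the delimiter, padding the payload this way is an ugly but effective workaround.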

Most likely, I’m doing something wrong, but I can’t seem to figure out what … time to throw in the towel and come back to it tomorrow.

0 Comments, 0 Trackbacks

[/technology/software] permalink