Kadin2048's Weblog

Wed, 14 Sep 2016

Everyone’s favorite security analyst Bruce Schneier seems to think that somebody is learning how to “take down the Internet” by repeatedly probing key pieces of “basic infrastructure” — exactly what’s being probed isn’t stated, but the smart money is on the DNS root servers. Naturally, who is doing this is left unsaid as well, although Schneier does at least hazard the obvious guess at China and Russia.

If this is true, it’s a seemingly sharp escalation towards something that might legitimately be called ‘cyberwarfare’, as opposed to the simple spying-using-computers that makes up most of what gets lumped in under that label today. Still, it’s not clear exactly why a state-level actor would want to crash DNS; it’s arguably not really “taking down the Internet”, although it would mess up a lot of stuff for a while. Even if you took down the root DNS servers, IP packets would still get routed (the IP network itself is pretty resilient), and operators could pretty quickly unplug their caching DNS resolvers and let them run independently, restoring service to their users. You could create a mess, but it wouldn’t be crippling in the long term.
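
(As a rough illustration of why the caches buy that time: the NS records for the TLDs carry long TTLs, so a recursive resolver only needs to consult the roots occasionally. A quick sketch of checking this, assuming a reasonably recent version of the third-party dnspython package and whatever recursive resolver your system is already configured to use:)

import dns.resolver  # third-party "dnspython" package

resolver = dns.resolver.Resolver()  # uses the system's configured resolver

# Ask for the NS records of a TLD; the TTL (here, as served or cached by
# the local resolver) shows how long a caching resolver can keep
# answering for that zone without ever talking to the roots.
answer = resolver.resolve("org.", "NS")
print("TTL on the .org NS records:", answer.rrset.ttl, "seconds")
for rr in answer:
    print(rr)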

Except perhaps as one component of a full-spectrum, physical-world attack, it doesn’t make a ton of sense to disrupt a country’s DNS resolvers for a few hours. And Russia and China don’t seem likely to actually attack the U.S. anytime soon; relations with both countries seem to be getting worse over time, but they’re not shooting-war bad yet. So why do it?

The only reason that comes to mind is that it’s less ‘preparation’ than ‘demonstration’. It’s muscle flexing on somebody’s part, and not particularly subtle flexing at that. The intended recipient of the message being sent may not even be the U.S., but some third party: “see what we can do to the U.S., and imagine what we can do to you”.

Or perhaps the eventual goal is to cover for a physical-world attack, but not against the U.S. (where it would probably result in the near-instant nuclear annihilation of everyone concerned). Perhaps the idea is to use a network attack on the U.S. as a distraction, while something else happens in the real world? Grabbing eastern Ukraine, or Taiwan, just as ideas.

Though an attack on the DNS root servers would be inconvenient in the short run, I am not sure that, in the long run, it would be the worst thing to happen to the network as an organism: DNS is a known weakness of the global Internet already, one that desperately needs a fix but where there’s not enough motivation to get everyone moving together. An attack would doubtless provide that motivation, and be a one-shot weapon in the process.

Update: This article from back in April, published by the ‘Internet Governance Project’, mentions a Chinese-backed effort to weaken US control over the root DNS, either by creating additional root servers or by potentially moving to a split root. So either the probing or a future actual disruption of DNS could be designed to further this agenda.

In 2014, [Paul] Vixie worked closely with the state-owned registry of China (CNNIC) to promote a new IETF standard that would allow the number of authoritative root servers to increase beyond the current limit of 13. As a matter of technical scalability, that may be a good idea. The problem is its linkage to a country that has long shown a more than passing interest in a sovereign Internet, and in modifying the DNS to help bring about sovereign control of the Internet. For many years, China has wanted its “own” root server. The proposal was not adopted by IETF, and its failure there seems to have prompted the formation and continued work of the YETI-DNS project.

The YETI-DNS project appears, at the moment, to be defunct. Still, China would seem to have the most to gain by making the current U.S.-based root DNS system seem fragile, given the stated goal of obtaining their own root servers.

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 11 Sep 2016

If you can only bear to read one 9/11 retrospective or tribute piece this year, I’d humbly suggest — if you are not already familiar — reading the story of Rick Rescorla, one of the many heroes of the WTC evacuation.

The Real Heroes Are Dead, written by James B. Stewart in The New Yorker, from February 2002, is worth the read.

0 Comments, 0 Trackbacks

[/other] permalink

Fri, 09 Sep 2016

This was originally posted to Hacker News as a comment in a discussion about “microhousing”. The question I was responding to was:

What is NIMBY for microhousing based on?

This is an ongoing argument in Northern Virginia (which is not quite as expensive as SF / Seattle / NYC, but probably only one cost tier below that). Micro-housing here typically takes the form of backyard apartments and the subdivision of single-family homes into boarding houses, and the major objections are basically the same issues that apply to all “just build more housing, stupid” proposals.

Basically, if you suddenly build a lot more housing, you’d start to strain the infrastructure of the community in other ways. That strain is really, really unpleasant to other people who share the infrastructure, and so current residents — who are often already feeling like things are strained and getting worse over time — would rather avoid making things worse. The easiest way to avoid making things worse is just to control the number of residents, and the easiest way to do that is to control the amount of housing: If you don’t live here, you’re probably not using the infrastructure. QED.

In many ways, building more housing is the easiest problem to solve when it comes to urban infrastructure. Providing a heated place out of the rain just isn’t that hard, compared to (say) transportation or schools or figuring out a sustainable economic balance.

Existing residents are probably (and reasonably) suspicious that once a bunch of tiny apartments are air-dropped in, and a bunch of people move in to fill them up, there won’t be any solution to any of the knock-on problems that will inevitably result — parking, traffic, school overcrowding, tax-base changes, stress to physical infrastructure like gas/water/sewer/electric systems — until those systems become untenably broken. I mean, I can’t speak to Seattle, but those things are already an increasingly severe problem in my area today, with the current number of residents, and people don’t have much faith in government’s ability to fix them; so the idea that the situation will improve once everyone installs a couple of backyard apartments is ridiculous. (And then there are questions like: how are these backyard apartments going to be taxed? Are people who move in really going to pay more in taxes than they consume in services and infrastructure impact, or is this going to externalize costs via taxes on everyone else? There’s no clear answer to these questions, and people are reluctant to become the test case.)

If you want more housing, you need more infrastructure. If you want more infrastructure, either you need a different funding model or you need better government and more trust in that government. Our government is largely (perceived to be) broken, and public infrastructure is (perceived to be) broken or breaking, and so the unsurprising result is that nobody wants to build more housing and add more strain to a system that’s well beyond its design capacity anyway.

That’s why there’s so much opposition to new housing construction, particularly to ideas that look just at ways to provide more housing without doing anything else. You’re always going to get a lot of opposition to “just build housing” proposals unless they’re part of a compelling plan to actually build a community around that new housing.

0 Comments, 0 Trackbacks

[/politics] permalink

Fri, 26 Aug 2016

Bruce Schneier has a new article on his blog about the NSA’s basically-all-but-confirmed stash of ‘zero day’ vulnerabilities, and it’s very solid, in typical Bruce Schneier fashion.

The NSA Is Hoarding Vulnerabilities

I won’t really try to recap it here, because it’s already about as concise as one can be about the issue. However, there is one thing in his article that I find myself mulling over, which is his suggestion that we should break up the NSA:

And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

Far be it from me to second-guess Schneier on most topics, but that just doesn’t seem to make a whole lot of sense. If the key problem is that vulnerabilities are being hoarded for offensive use rather than being shared with manufacturers (defensive use), it doesn’t seem like splitting those two missions into separate agencies is going to improve things. And the predictable result is that we’re then going to have two separate agencies working against one another, doing essentially the same research, looking for the same underlying vulnerabilities, for different aims. That seems… somewhat inefficient.

And if history is any guide, the U.S. will probably spend more on offensive armaments than on defense. Contrary to the Department of Defense’s name, since the end of WWII we have based our national-defense posture largely on a policy of force projection and deterrence-through-force, and I am highly skeptical that, as a nation, we’re going to suddenly take a different tack when it comes to “cyberwarfare” / IT security. The tension between offense and defense isn’t unique to IT: it exists in lots of other places, from ICBMs to vehicle armor, and in most cases U.S. doctrine emphasizes the offensive, force-projective capability. This is practically a defining element of U.S. strategic doctrine over the past 60 years.

So the net result of Schneier’s proposal would probably be to take the gloves off the NSA: it would be relieved of the defensive mission completely — that mission going to DHS, which hardly seems capable of taking on a robust cyberdefense role, but let’s ignore that for the sake of polite discussion — and would almost certainly emerge with its funding and offensive role intact. (Or even if there was a temporary shift in funding, since our national adversaries have, and apparently make use of, offensive cyberwarfare capabilities, it would only be a matter of time until we felt a ‘cyber gap’ and turned on the funding tap again.) This doesn’t seem like a net win from a defense standpoint.

I’ll go further, admittedly speculation: I suspect that the package of vulnerabilities (dating from 2013) that are currently being “auctioned” by the group calling themselves the Shadow Brokers probably owe their nondisclosure to some form of internal firewalling within NSA as an organization. That is to say, the sort of offensive/defensive separation that Schneier is seemingly proposing at a national level probably exists within NSA already and is related to why the zero-day vulnerabilities weren’t disclosed. We’ll probably never know for sure, but it wouldn’t surprise me if someone was hoarding the vulnerabilities within or for a particular team or group, perhaps in order to prevent them from being subject to an “equities review” process that might decide they were better off being disclosed.

What we need is more communication, not less, and we need to make the communication flow in a direction that leads to public disclosure and vulnerability remediation in a timely fashion, while also realistically acknowledging the demand for offensive capacity. Splitting up the NSA wouldn’t help that.

However, in the spirit of “modest proposals”, a change in leadership structure might: currently, the Director of the NSA is also the Commander of the U.S. Cyber Command and Chief of the Central Security Service. It’s not necessarily clear to me that having all those roles, two-thirds of which are military and thus tend to lean ‘offensive’ rather than ‘defensive’, reside in the same person is ideal, and perhaps some thought should be given to having the NSA Director come from outside the military, if the goal is to push the offensive/defensive pendulum back in the opposite direction.

0 Comments, 0 Trackbacks

[/politics] permalink

The other day I discovered an interesting Python behavior that I somehow had managed not to hit before — in fairness, I use Python mostly for scripting and automation, not ‘real’ software development, but I still thought I understood the basics reasonably well.

Can you spot the problem? The following is designed to remove words from a list if they are below a certain number of characters, specified by args.minlength:

for w in words:
    if len(w) < int(args.minlength):
        words.remove(w)

The impending misbehavior, if you didn’t catch it by this point, is not necessarily obvious. It won’t barf an error at you, and you can actually get it to pass a trivial test, depending on how the test data is configured. But on a real dataset, you’ll end up with lots of words shorter than args.minlength left in words after you (thought you) iterated through and cleaned them!

(If you want to play with this on your own, try running the above loop against the contents of your personal iSpell dictionary — typically ~/.ispell_english on Unix/Linux — or some other word list. The defect will quickly become apparent.)
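
If you don’t have a word list handy, here’s a minimal, self-contained reproduction; the words below are just made up for illustration, and any list with consecutive too-short entries will show the same thing:

words = ["a", "an", "at", "axe", "be", "by", "bee"]
minlength = 3

for w in words:
    if len(w) < minlength:
        words.remove(w)

# Expected: only words of 3+ characters remain.
# Actual:   ['an', 'axe', 'by', 'bee'] -- 'an' and 'by' slip through,
# because each removal shifts the list under the live iterator, so the
# element immediately after a removed word never gets examined.
print(words)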

A good description of the problem, along with several solutions, is of course found on Stack Overflow. But to save you the click: the problem is iterating over a mutable object, such as a list, and then modifying the list (e.g. by removing items) inside the loop. Per the Python docs, you shouldn’t do that:

If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy.

The solution is easy:

for w in words[:]:
    if len(w) < int(args.minlength):
        words.remove(w)

Adding the slice notation causes Python to iterate over a copy of the list (pre-modification), which is what you actually want most of the time, and then you’re free to modify the actual list all you want from inside the loop. There are lots of other possible solutions if you don’t like the slice notation, but that one seems pretty elegant (and it’s what’s recommended in the Python docs so it’s presumably what someone else reading your code ought to expect).
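
For the record, here are two of the more common alternatives, sketched against the same hypothetical args.minlength from above; both build a new list rather than mutating the one being iterated:

# 1. A list comprehension, keeping only the words that are long enough.
#    (Assigning to words[:] instead of words would update the existing
#    list in place, which matters if anything else holds a reference.)
words = [w for w in words if len(w) >= int(args.minlength)]

# 2. filter() with a predicate, converted back to a list.
words = list(filter(lambda w: len(w) >= int(args.minlength), words))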

I’d seen the for item in list[:]: construct in sample code before, but the exact nature of the bugs it prevents hadn’t been clear to me. Perhaps this will be enlightening to someone else as well.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Tue, 23 Aug 2016

About fifty pages into John Bruce Medaris’s 1960 autobiography Countdown for Decision, there is an unsourced quote attributed to Col. C.G. Patterson, who in 1944 was in charge of Anti-Aircraft Artillery for the U.S. First Army, outlining the concept of a “technological casualty”:

“If a weapon costs more to build, in money, materials, and manpower, than it costs the enemy to repair the damage the weapon causes, the user has suffered a technological casualty. In any long-drawn-out struggle this might be the margin between victory and defeat.” 1

As far as I can tell, the term “technological casualty” never passed into general usage with that meaning, which is unfortunate. And although sources do confirm that Col. Patterson existed and by all accounts served admirably as the commander of air defense artillery for First Army in 1944, there doesn’t appear to be much record of the quote outside of Medaris’s book. Still, credit where it is most likely due; if ever a shorthand name for this idea is required, I might humbly suggest “Patterson’s Dictum”. (It also sounds good.)

I suspect, given Patterson’s role at the time, that the original context of the quote had to do with offensive or defensive air capability. Perhaps it referred to the attrition of German capability that was then ongoing. In Countdown, Medaris discusses it in the context of the V-2, which probably consumed more German war resources to build than it destroyed of Allied ones. But it is certainly applicable more broadly.

On its face, Patterson’s statement assumes a sort of attritional, clash-of-civilizations, total-commitment warfare, where all available resources of one side are stacked against all available resources of the other. One might contend that it doesn’t have much applicability in the age of asymmetric warfare, now that we have a variety of examples of conflicts ending in victory — in the classic Clausewitzian political sense — for parties who never possessed any sort of absolute advantage in money, materials, or manpower.

But I would counter that even in the case of a modern asymmetric war, or a realpolitik-fueled ‘brushfire’ conflict with limited aims, the fundamental calculus of war still exists; it just isn’t as straightforward. Beneath all the additional terms that get added to the equation is the essential fact that defeat is always possible if victory proves too expensive. Limited war doesn’t require that you outspend your adversary’s entire society, only their ‘conflict budget’: their willingness to expend resources in that particular conflict.

Which makes Patterson’s point quite significant: if a modern weapons system can’t subtract as much from an adversary’s ‘conflict budget’ — either through actual destructive power, deterrence, or some other effect — as it subtracts from ours in order to field it (including the risk of loss), then it is essentially a casualty before it ever arrives.

1: Countdown for Decision (1960 ed.), page 51.

0 Comments, 0 Trackbacks

[/politics] permalink

Mon, 22 Aug 2016

Ars Technica has a nice article, published earlier this month, on the short life of the Digital Compact Cassette format, one of several attempts to replace the venerable analog cassette tape with a digital version, prior to its eventual demise in the download era.

At risk of dating myself, I remember the (very brief) rise and (anticlimactic) fall of the Digital Compact Cassette, although I was a bit too poor to be part of the early-adopter, hi-fi-phile market that the first decks were aimed at. And while the Ars article is decent, it ignores the elephant in the room that contributed mightily to DCC’s demise: DRM.

DCC was burdened by a DRM system called SCMS, also present in the consumer version of DAT. This inclusion was not the fault of Philips or Matsushita (later Panasonic), who designed DCC, but a result of an odious RIAA-backed law passed in 1992, the Audio Home Recording Act, which mandated it in all “digital audio recording device[s]”.

It is telling that of the variety of formats encumbered by SCMS, exactly zero ever succeeded in the marketplace in a way that threatened the dominant formats. The AHRA was (and remains, de jure, because it’s still out there on the books, a piece of legal “unexploded ordnance” waiting for someone to step on it) the RIAA’s most potent and successful weapon in terms of suppressing technological advancement and maintaining the status quo throughout the 1990s.

Had it not been for the AHRA and SCMS, I think it’s likely that US consumers might have had not one but two alternative formats for digital music besides the CD, and perhaps three: consumer DAT, DCC, and MiniDisc. Of these, DAT is probably the best format from a pure-technology perspective — it squeezes more data into a smaller physical space than the other two, eliminating the need for lossy audio compression — but DAT decks are mechanically complex, owing to their helical scan system, and the smallest portable DATs never got down to Walkman size. DCC, on the other hand, used a more robust linear tape system, and perhaps most importantly it was compatible with analog cassette tapes. I think there is a very good chance that it could have won the battle, if the combatants had been given a chance to take the field.

But the AHRA and SCMS scheme conspired to make both consumer-grade DAT and DCC unappealing. Unlike today, where users have been slowly conditioned to accept that their devices will oppose them at every opportunity in the service of corporations and their revenue streams, audio enthusiasts from the analog era were understandably hostile to the idea that their gear might stop them from doing something it was otherwise quite physically capable of doing, like dubbing from one tape to another, or from a CD to a tape, in the digital domain. And a tax on blank media just made the price premium for digital, as opposed to analog, that much higher. If you are only allowed to make a single generation of copies due to SCMS, and if you’re going to pay extra for the digital media due to the AHRA, why not just get a nice analog deck with Dolby C or DBX Type 2 noise reduction, and spend the savings on a boatload of high-quality Type IV metal cassettes?

That was the question that I remember asking myself anyway, at the time. I never ended up buying a DCC deck, and like most of the world continued listening to LPs, CDs, and analog cassettes right up until cheap computer-based CD-Rs and then MP3 files dragged the world of recorded music fully into the digital age, and out of the shadow of the AHRA.

0 Comments, 0 Trackbacks

[/technology] permalink

Tue, 16 Aug 2016

Bloomberg’s Matt Levine has a great article, published today, which begins with a discussion of the apparently-hollow shell company “Neuromama” (OTC: NERO), which — cue shocked face — is probably not in reality a $35 billion USD company, but quickly moves into a delightful discussion of insider trading, money market rates, an “underpants gnomes”-worthy business plan, and the dysfunction of the Commodity Futures Trading Commission. There’s even a bonus mention of Uber shares trading on the secondary market, which is something I’ve written about before. Definitely worth a read:

Heavy Ion Fusion and Insider Trading

If you only read one section of it, the part on “When is insider trading a crime?” is, in my humble opinion, probably the best. (Memo to self: next time there’s a big insider-trading scandal, be sure to come back to this.) But really, it’s a good article. Okay, there’s a bit too much gloating about those stupid regulators and their stupid regulations for someone who isn’t a hedge fund manager to get excited about, but it’s fucking Bloomberg; that sort of thing is probably a contractual obligation for getting printed there. Also it’s Congress’ fault anyway, as usual.

0 Comments, 0 Trackbacks

[/finance] permalink

Mon, 15 Aug 2016

Very cool open-source project VeraCrypt is all over the news this week, it seems. First when they announced that they were going to perform a formal third-party code audit, and had come up with the funds to pay for it; and then today when they claimed their emails were being intercepted by a “nation-state” level actor.

The audit is great news, and once it’s complete I think we’ll have even more confidence in VeraCrypt as a successor to TrueCrypt (which suffered from a bizarre developer meltdown1 back in 2014).

The case of the missing messages

However, I’m a bit skeptical about the email-interception claim, at least based on the evidence put forward so far. It may be the case — and, let’s face it, should be assumed — that their email really is being intercepted by someone, probably multiple someones. Frankly, if you’re doing security research on a “dual use” tool2 like TrueCrypt and don’t think that your email is being intercepted and analyzed, you’re not participating in the same consensus reality as the rest of us. So, not totally surprising on the whole. Entirely believable.

What is weird, though, is that the evidence for the interception is that some messages have mysteriously disappeared in transit.

That doesn’t really make sense from the standpoint of the mysterious nation-state-level interceptor, because making the messages disappear tips your hand, and it also isn’t really consistent with how most modern man-in-the-middle style attacks work. Most MITM attacks require that the attacker be in the middle, that is, talking to both ends of the connection and passing information; you can’t successfully pull off most TLS-based attacks otherwise. And if you’re sophisticated enough to do those attacks, you’re already in a position to pass the message through, so why not do it?

There’s no reason not to just pass the message along, and that plus Occam’s Razor is why I think the mysteriously disappearing messages aren’t a symptom of spying at all. I think there’s a much more prosaic explanation. Which is not to say that their email isn’t being intercepted. It probably is. But I don’t think the missing messages are necessarily a smoking gun displaying a nation-state’s interest.

Another explanation

An alternative, if more boring, explanation for why some messages aren’t going through has to do with how Gmail handles outgoing email. Most non-Gmail mailhosts have entirely separate servers for incoming and outgoing mail. Outgoing mail goes through SMTP servers, while incoming mail is routed to IMAP (or sometimes POP) servers. The messages users see when looking at their mail client (MUA) are all stored on the incoming server. This includes, most critically, the content of the “Sent” folder.

In order to show you messages that you’ve sent, the default configuration of many MUAs, including Mutt and older versions of Apple Mail and Microsoft Outlook, is to save a copy of the outgoing message in the IMAP server’s “Sent” folder at the same time that it’s sent to the SMTP server for transmission to the recipient.
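
To make that concrete, here is roughly what the traditional save-to-Sent dance looks like, sketched with Python’s standard smtplib and imaplib; the hostnames, credentials, and folder name below are placeholders, not anyone’s real configuration:

import imaplib
import smtplib
import time
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"   # outgoing server (placeholder)
IMAP_HOST = "imap.example.com"   # incoming server (placeholder)
USER, PASSWORD = "user@example.com", "not-a-real-password"

msg = EmailMessage()
msg["From"] = USER
msg["To"] = "recipient@example.net"
msg["Subject"] = "Test"
msg.set_content("Hello from a traditional MUA.")

# Step 1: hand the message to the outgoing (SMTP) server for delivery.
with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)

# Step 2: separately append a copy to the "Sent" folder on the incoming
# (IMAP) server. This is the step Gmail's own SMTP servers perform for
# you, and the step most Gmail users therefore turn off in their MUA.
with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.append("Sent", "\\Seen",
                imaplib.Time2Internaldate(time.time()),
                msg.as_bytes())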

This is a reasonable default for most ISPs, but not for Gmail. Google handles outgoing messages a bit differently, and their SMTP servers have more-than-average intelligence for an outgoing mail server. If you’re a Gmail user and you send your outgoing mail using a Gmail SMTP server, the SMTP server will automatically communicate with the IMAP server and put a copy of the outgoing message into your “Sent” folder. Pretty neat, actually. (A nice effect of this is that you get a lot more headers on your sent messages than you’d get by doing the save-to-IMAP route.)

So as a result of Gmail’s behavior, virtually all Gmail users have their MUAs configured not to save copies of outgoing messages via IMAP, and depend on the SMTP server to do it instead. This avoids duplicate messages ending up in the “Sent” folder, a common problem with older MUAs.

This is all fine, but it does have one odd effect: if your MUA is configured to use Gmail’s SMTP servers and then you suddenly use a different, non-Google SMTP server for some reason, you won’t get the sent messages in your “Sent” box anymore. All it takes is an intermittent connectivity problem to Google’s servers, causing the MUA to fail over to a different SMTP server (maybe an old ISP SMTP or some other configuration), and messages won’t show up anymore. And if the SMTP server it rolls over to isn’t correctly configured, messages might just get silently dropped.

I know this because it’s happened to me: I have Gmail’s SMTP servers configured as primary, but I also have my ISP’s SMTP server set up in my MUA, because I have to use it for some other email accounts that don’t come with a non-port-25 SMTP server (and my ISP helpfully blocks outgoing connections on port 25). It’s probably not an uncommon configuration at all.

Absent some other evidence that the missing messages are being caused by a particular attack (and it’d have to be a fairly blunt one, which makes me think someone less competent than nation-state actors), I think it’s easier to chalk the behavior up to misconfiguration than to enemy action.

Ultimately, though, it doesn’t really matter, because everyone ought to be acting as though their messages are going to be intercepted as they go over the wire anyway. The Internet is a public network: by definition, there are no security guarantees in transit. If you want to prevent snooping, the only solution is end-to-end crypto combined with good endpoint hygiene.

Here’s wishing all the best to the VeraCrypt team as they work towards the code audit.

1: Those looking for more information on the TrueCrypt debacle can refer to this Register article or this MetaFilter discussion, both from mid-2014. This 2015 report may also be of interest. But as far as I know, the details of what happened to the developers to prompt the project’s digital self-immolation are still unknown, and speculation abounds about the security of the original TrueCrypt.

2: “Dual use” in the sense that it is made available for use by anyone, and can therefore be used for both legitimate/legal and illegitimate/illegal purposes. I think it goes almost without saying that most people in the open-source development community accept the use of their software by bad actors as simply a cost of doing business and a reasonable trade-off for freedom, but this is clearly not an attitude that is universally shared by governments.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Fri, 12 Aug 2016

The work I’ve been doing with Tvheadend to record and time-shift ATSC broadcast television got me thinking about my pile of old NTSC tuner cards, left over from my MythTV system designed for recording analog cable TV. These NTSC cards aren’t worth much now that both OTA broadcast and most cable systems have shifted completely over to ATSC and QAM digital modulation schemes, except in one regard: they ought to still be able to receive FM broadcasts.

Since the audio component of NTSC TV transmissions is basically just FM, and the NTSC TV bands completely surround the FM broadcast band on both sides, any analog TV receiver should have the ability to receive FM audio as well — at least in mono (FM stereo and NTSC stereo were implemented differently, the latter with a system called MTS). But of course whether this is actually possible depends on the tuner card’s implementation.

I haven’t plugged in one of my old Hauppauge PCI tuner cards yet, although they may not work because they contain an onboard MPEG-2 hardware encoder — a feature I paid dearly for, a decade ago, because it significantly reduces the demand on the host system’s processor for video encoding — and it wouldn’t surprise me if the encoder failed to work on an audio-only signal. My guess is that the newer cards, which basically just grab a chunk of spectrum and digitize it, leaving all (or most) of the demodulation to the host computer, will be a lot more useful.

I’m not the first person to think that having a ‘TiVo for radio’ would be a neat idea, although Googling for anything in that vein gets you a lot of resources devoted to recording Internet “radio” streams (which I hate referring to as “radio” at all). There have even been dedicated hardware gadgets sold from time to time, designed to allow FM radio timeshifting and archiving.

  • Linux based Radio Timeshifting is a very nice article, written back in 2003, by Yan-Fa Li. Some of the information in it is dated now, and of course modern hardware doesn’t even break a sweat doing MP3 encoding in real time. But it’s still a decent overview of the problem.
  • This Slashdot article on radio timeshifting, also from 2003 (why was 2003 such a high-water-mark for interest in radio recording?), still has some useful information in it as well.
  • The /drivers/media/radio tree in the Linux kernel contains drivers for a variety of FM tuners. Some of the supported devices are quite old (hello, ISA bus!) while some of them are reasonably new and not hard to find on eBay.

Since I have both a bunch of old WinTV PCI cards and a newer RTL2832U SDR dongle, I’m going to try to investigate both approaches: seeing if I can use the NTSC tuner as an over-engineered FM receiver, and if that fails maybe I’ll play around with RTL-SDR and see if I can get that to receive FM broadcasts at reasonable quality.
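
For the RTL-SDR route, the usual recipe seems to be to let rtl_fm do the wideband-FM demodulation and pipe the resulting raw PCM into an encoder. Something like the following sketch is what I have in mind; the frequency, sample rates, and lame options are guesses to be tuned against real hardware, and both rtl_fm and lame obviously need to be installed:

import subprocess

# Demodulate an FM station with rtl_fm and hand the raw 16-bit signed
# mono PCM to lame for MP3 encoding. Runs until interrupted (Ctrl-C).
rtl = subprocess.Popen(
    ["rtl_fm", "-M", "wbfm", "-f", "89.1M", "-s", "200000", "-r", "48000", "-"],
    stdout=subprocess.PIPE,
)
lame = subprocess.Popen(
    ["lame", "-r", "-s", "48", "-m", "m", "--bitwidth", "16", "--signed",
     "-", "fm-capture.mp3"],
    stdin=rtl.stdout,
)
rtl.stdout.close()  # so rtl_fm sees a broken pipe if lame exits early
lame.wait()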

0 Comments, 0 Trackbacks

[/technology/software] permalink