Kadin2048's Weblog


Thu, 20 Oct 2016

For all the stupidity of the current Presidential election, one interesting discussion it has prompted is a resurrection of the old debate over nuclear strategy, particularly the posture of “launch under attack”, better known as “Launch On Warning” (LOW). Jeffrey Lewis ties this into current events in a Foreign Policy article, “Our Nuclear Procedures Are Crazier Than Trump”, prompted by recent statements from both candidates.

Much of the discussion in the last 24 hours has centered on whether Hillary Clinton inadvertently disclosed classified information when she mentioned, during the third debate, that the President would have only “four minutes” to decide on whether to respond in the event of a large-scale attack on the continental U.S. by an adversary. This is not, at least to me, a particularly interesting discussion; nothing Clinton said goes beyond what is in the open literature on the topic and has been available for decades.

What is interesting is that, in 2016, we’re talking about Launch On Warning at all. Clinton’s “four minutes” should be a thing of the past.

I mean: the other President Clinton supposedly moved the U.S. away from LOW in a 1997 Presidential Directive, instead putting U.S. forces on a stance of retaliating only after actually absorbing a successful attack. This is a reasonable posture, given that the U.S. SSBN force alone has enough destructive power to serve, independently of the rest of the ‘nuclear triad’, as a credible deterrent against a first strike by another global power.

What’s interesting is that, at the time, the Clinton administration downplayed the move and said that it was merely a continuation of existing policy dating from the Reagan years and expressed in previous PDDs. A Clinton spokesperson reportedly said at the time: “in this PDD we direct our military forces to continue to posture themselves in such a way as to not rely on launch on warning—to be able to absorb a nuclear strike and still have enough force surviving to constitute credible deterrence.” (Emphasis mine.)

The actual Presidential Directives are, as one might expect, still classified, so we don’t have much besides hearsay and the statements of various spokespeople to go on. But it seems safe to say that the U.S. has not depended on LOW since at least 1997, and probably since some point in the 1980s. I suspect the original change was prompted by a combination of near-miss events in the 1970s (e.g. Zbigniew Brzezinski’s infamous 3 A.M. wakeup call on November 9, 1979) and the maturation of the modern SSBN force into a viable second-strike weapon, which together caused U.S. leaders to question the wisdom of keeping the nuclear deterrent on a hair trigger. As well they probably should have, given the risks.

In fact, being able to lower the proverbial hammer and relax the national trigger finger somewhat is probably the biggest benefit of having an SSBN force. It’s why other nuclear powers, notably the U.K., have basically abandoned ground-based nuclear launch systems in favor of relying exclusively on submarines for deterrence. The U.K., famously, issues “Letters of Last Resort” to its submarine captains, potentially giving them launch authority even in the absence of any external command and control structure, thus ensuring a retaliatory capability even in the event of the complete annihilation of the U.K. itself. While this places a lot of responsibility on the shoulders of a handful of submarine captains, it also relieves the entire U.K. defense establishment of having to plan for and absorb a decapitation attack, and it certainly seems like a better overall plan than automated systems designed to do the same thing.

In the U.S. we’ve never gone as far as the U.K. in terms of delegating nuclear-launch authority (perhaps because the size of the U.S. nuclear deterrent would require an unacceptable number of trusted individuals), but it’s been a while since any President has truly needed to choose, in a handful of minutes, between ending the world and facing unilateral annihilation. A President would still have to decide within that very short window whether to authorize a U.S. ICBM launch, but declining wouldn’t forfeit all retaliatory capacity, and it is difficult to imagine — given the possibility of false alarms, and our actual past experience with them — that a sane president would authorize a launch before confirmation of an actual attack on U.S. soil.

So why did the “four minute” number resurface at all? That’s a bit of a mystery. It could have just been a debate gambit by Clinton, which is admittedly the simplest explanation, or perhaps the idea of Launch On Warning isn’t completely gone from U.S. strategic policy. This is not implausible, since we still maintain a land-based ICBM force, and the ICBMs are still subject to the first-strike advantage which produced Launch On Warning in the first place.

And rather than debating the debate, which will be moot in a very few weeks, the real question is why we bother to maintain the land-based strategic nuclear ICBM force at all.

Here’s a modest proposal: retire the ICBM force’s strategic nuclear warheads, but retain the missile airframes and other launch infrastructure. Let other interested parties observe the nuclear decommissioning, if they want to, so that there’s no mistaking a future launch of those missiles for a nuclear one. And then use the missiles for non-nuclear Prompt Global Strike or a similar mission (e.g. non-nuclear FOBS, “rod from God” kinetic weapons, or whatever our hearts desire).

It ought to make everyone happy: it means that many fewer fielded nuclear weapons in the world; it eliminates the most vulnerable leg of the nuclear triad and moves us firmly away from LOW; it doesn’t take away any service branch’s sole nuclear capability (the Air Force would retain air-launched strategic capability, as a hedge against future developments making the SSBN force obsolete); and it trades an expensive, not-especially-useful strategic capability for a much more useful tactical one, which in the long term could allow the U.S. to draw down overseas-deployed personnel and vulnerable carrier strike groups while retaining rapid global reach.

It makes too much sense to ever actually occur, of course, at least not during an election season.


[/politics] permalink

Thu, 13 Oct 2016

At some point, Yahoo started sticking a really annoying popup on basically every single Flickr page, if you aren’t logged in with a Yahoo ID. Blocking these popups is reasonably straightforward with uBlock or ABP, but it took me slightly longer than it should have to figure it out.

As usual, here’s the tl;dr version. Add this to your uBlock “My filters”:

! Block annoying Flickr login popups
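! The selector below is an illustrative placeholder only; substitute the
! genericized rule you get from the Ctrl-click procedure described below.
flickr.com##div.login-popup-overlay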

That’s it. Note that this doesn’t really “block” anything; it’s a CSS hiding rule. For it to work, make sure ‘Cosmetic Filters’ is enabled in uBlock / uBlock Origin.

The slightly longer story of why this took more than ten seconds of my time: the default uBlock rule created when you right-click on one of the popups and select ‘Block Element’ doesn’t work well, because Yahoo embeds a bunch of random characters in the CSS for each popup, and they change on every page load. (It’s not clear to me whether this is designed expressly to defeat adblockers and popup blockers, but it certainly looks a bit like a blackhat tactic.)

Using the uBlock Origin GUI, you have to Ctrl-click (Cmd-click on a Mac) on the top element-hiding rule to get a ‘genericized’ version that drops the full CSS path and therefore works across page reloads. I’d never dug into any of the advanced features of uBlock Origin before — it’s always just basically worked out of the box, insofar as I needed it to — so this feature was a nice discovery.

Why, exactly, Yahoo is shoving this annoying popup in front of content on virtually every Flickr page, for every non-logged-in viewer, isn’t clear, although we can certainly speculate: Yahoo is probably desperate at this point to get users to log in. Part of their value as a company hinges on the number of active users they can claim, so each person they hard-sell into logging in adds a little to the price they’ll probably get whenever somebody steps in and buys them.

As a longtime Flickr user, that end can’t come soon enough. It was always disappointing that Flickr sold out to Yahoo at all; somewhere out there, I believe there’s a slightly-less-shitty parallel universe where Google bought Flickr, and Yahoo bought YouTube, and Flickr’s bright and beautiful site culture was saved just as YouTube’s morass of vitriol and intolerance became Yahoo’s problem to moderate. Sadly, we do not live in that universe. (And, let’s be honest, Google would probably have killed off Flickr years ago, along with everything else in their Graveyard of Good Ideas. See also: Google Reader.)

Perhaps once Yahoo is finally sold and broken up for spare parts, someone will realize that Flickr still has some value and put some effort into it, aside from strip-mining it for logins as Yahoo appears to be doing. A man can dream, anyway.


[/technology/web] permalink

Wed, 14 Sep 2016

Everyone’s favorite security analyst Bruce Schneier seems to think that somebody is learning how to “take down the Internet” by repeatedly probing key pieces of “basic infrastructure” — exactly what’s being probed isn’t stated, but the smart money is on the DNS root servers. Naturally, who is doing this is left unsaid as well, although Schneier does at least hazard the obvious guess at China and Russia.

If this is true, it’s a sharp escalation toward something that might legitimately be called ‘cyberwarfare’, as opposed to the spying-using-computers that makes up most of what gets lumped under that label today. Still, it’s not clear exactly why a state-level actor would want to crash DNS; doing so arguably isn’t really “taking down the Internet”, although it would mess up a lot of stuff for a while. Even if you took down the root DNS servers, IP packets would still get routed (the underlying IP network is quite resilient), and operators could fairly quickly cut their caching DNS resolvers loose to run independently, restoring service to their users. You could create a mess for a while, but it wouldn’t be crippling in the long term.
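To make that layering concrete, here’s a toy sketch in Python (the IP literal is just an illustrative address that happened to serve example.com at the time of writing; any host you know by number works):

import socket

# Routing and TCP don't depend on DNS: connecting to a server by its
# IP literal involves no name resolution at all.
socket.create_connection(("93.184.216.34", 80), timeout=5).close()

# This is the step a root-DNS outage would (eventually) break, once
# cached records at the resolvers expired:
print(socket.gethostbyname("example.com"))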

Except perhaps as one component of a full-spectrum, physical-world attack, it doesn’t make a ton of sense to disrupt a country’s DNS resolvers for a few hours. And Russia and China don’t seem likely to actually attack the U.S. anytime soon; relations with both countries seem to be getting worse over time, but they’re not shooting-war bad yet. So why do it?

The only reason that comes to mind is that it’s less ‘preparation’ than ‘demonstration’. It’s muscle flexing on somebody’s part, and not particularly subtle flexing at that. The intended recipient of the message being sent may not even be the U.S., but some third party: “see what we can do to the U.S., and imagine what we can do to you”.

Or perhaps the eventual goal is to cover for a physical-world attack, but not against the U.S. (where it would probably result in the near-instant nuclear annihilation of everyone concerned). Perhaps the idea is to use a network attack on the U.S. as a distraction, while something else happens in the real world? Grabbing eastern Ukraine, or Taiwan, just as ideas.

Though an attack on the DNS root servers would be inconvenient in the short run, I am not sure that, in the long run, it would be the worst thing to happen to the network as an organism: DNS is a known weakness of the global Internet, one that desperately needs a fix but where there’s not enough motivation to get everyone moving together. An attack would doubtless provide that motivation, and be a one-shot weapon in the process.

Update: This article from back in April, published by the ‘Internet Governance Project’, mentions a Chinese-backed effort to weaken US control over the root DNS, either by creating additional root servers or by potentially moving to a split root. So either the probing or a future actual disruption of DNS could be designed to further this agenda.

In 2014, [Paul] Vixie worked closely with the state-owned registry of China (CNNIC) to promote a new IETF standard that would allow the number of authoritative root servers to increase beyond the current limit of 13. As a matter of technical scalability, that may be a good idea. The problem is its linkage to a country that has long shown a more than passing interest in a sovereign Internet, and in modifying the DNS to help bring about sovereign control of the Internet. For many years, China has wanted its “own” root server. The proposal was not adopted by IETF, and its failure there seems to have prompted the formation and continued work of the YETI-DNS project.

The YETI-DNS project appears, at the moment, to be defunct. Still, China would seem to have the most to gain by making the current U.S.-based root DNS system seem fragile, given the stated goal of obtaining their own root servers.


[/technology] permalink

Sun, 11 Sep 2016

If you can only bear to read one 9/11 retrospective or tribute piece this year, I’d humbly suggest — if you are not already familiar — reading the story of Rick Rescorla, one of the many heroes of the WTC evacuation.

The Real Heroes Are Dead, written by James B. Stewart in The New Yorker, from February 2002, is worth the read.


[/other] permalink

Fri, 09 Sep 2016

This was originally posted to Hacker News as a comment in a discussion about “microhousing”. The question I was responding to was:

What is NIMBY for microhousing based on?

There is an ongoing argument over micro-housing in Northern Virginia (which is not quite as expensive as SF, Seattle, or NYC, but probably only one cost tier below), typically in the form of backyard apartments and the subdivision of single-family homes into boarding houses. The major arguments are basically the same ones that apply to all “just build more housing, stupid” proposals.

Basically, if you suddenly build a lot more housing, you’d start to strain the infrastructure of the community in other ways. That strain is really, really unpleasant to other people who share the infrastructure, and so current residents — who are often already feeling like things are strained and getting worse over time — would rather avoid making things worse. The easiest way to avoid making things worse is just to control the number of residents, and the easiest way to do that is to control the amount of housing: If you don’t live here, you’re probably not using the infrastructure. QED.

In many ways, building more housing is the easiest problem to solve when it comes to urban infrastructure. Providing a heated place out of the rain just isn’t that hard, compared to (say) transportation or schools or figuring out a sustainable economic balance.

Existing residents are probably (and reasonably) suspicious that once a bunch of tiny apartments are air-dropped in, and a bunch of people move in to fill them up, there won’t be any solution to the knock-on problems that will inevitably result — parking, traffic, school overcrowding, tax-base changes, stress to physical infrastructure like gas/water/sewer/electric systems — until those systems become untenably broken. I can’t speak to Seattle, but those things are already an increasingly severe problem in my area today, with the current number of residents, and people don’t have much faith in government’s ability to fix them; so the idea that the situation will somehow improve once everyone installs a couple of backyard apartments strikes them as ridiculous. (And then there are questions like: how are these backyard apartments going to be taxed? Will the people who move in really pay more in taxes than they consume in services and infrastructure impact, or will the costs be externalized via taxes on everyone else? There’s no clear answer, and people are reluctant to become the test case.)

If you want more housing, you need more infrastructure. If you want more infrastructure, either you need a different funding model or you need better government and more trust in that government. Our government is largely (perceived to be) broken, and public infrastructure is (perceived to be) broken or breaking, and so the unsurprising result is that nobody wants to build more housing and add more strain to a system that’s well beyond its design capacity anyway.

That’s why there’s so much opposition to new housing construction, particularly to ideas that look just at ways to provide more housing without doing anything else. You’re always going to get a lot of opposition to “just build housing” proposals unless they’re part of a compelling plan to actually build a community around that new housing.


[/politics] permalink

Fri, 26 Aug 2016

Bruce Schneier has a new article about the NSA’s basically-all-but-confirmed stash of ‘zero day’ vulnerabilities on his blog, and it’s very solid, in typical Bruce Schneier fashion.

The NSA Is Hoarding Vulnerabilities

I won’t really try to recap it here, because it’s already about as concise as one can be about the issue. However, there is one thing in his article that I find myself mulling over, which is his suggestion that we should break up the NSA:

And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

Far be it from me to second-guess Schneier on most topics, but that just doesn’t seem to make a whole lot of sense. If the key problem is that vulnerabilities are being hoarded for offensive use rather than being shared with manufacturers (defensive use), it doesn’t seem like splitting those two missions into separate agencies is going to improve things. And the predictable result is that we’re then going to have two separate agencies working against one another, doing essentially the same research, looking for the same underlying vulnerabilities, for different aims. That seems… somewhat inefficient.

And if history is any guide, the U.S. will probably spend more on offensive armaments than on defense. Contrary to the Department of Defense’s name, since the end of WWII we have based our national-defense posture largely on a policy of force projection and deterrence-through-force, and I am highly skeptical that, as a nation, we’re going to suddenly take a different tack when it comes to “cyberwarfare” / IT security. The tension between offense and defense isn’t unique to IT: it exists in lots of other places, from ICBMs to vehicle armor, and in most cases U.S. doctrine emphasizes the offensive, force-projective capability. This is practically a defining element of U.S. strategic doctrine over the past 60 years.

So the net result of Schneier’s proposal would probably be to take the gloves off the NSA: relieve it of the defensive mission completely, handing that to DHS — which hardly seems capable of taking on a robust cyberdefense role, but let’s ignore that for the sake of polite discussion — while the agency almost certainly emerges with its funding and offensive role intact. (Even if there were a temporary shift in funding, our national adversaries have, and apparently use, offensive cyberwarfare capabilities, so it would only be a matter of time until we felt a ‘cyber gap’ and turned the funding tap back on.) This doesn’t seem like a net win from a defense standpoint.

I’ll go further, admittedly speculation: I suspect that the package of vulnerabilities (dating from 2013) that are currently being “auctioned” by the group calling themselves the Shadow Brokers probably owe their nondisclosure to some form of internal firewalling within NSA as an organization. That is to say, the sort of offensive/defensive separation that Schneier is seemingly proposing at a national level probably exists within NSA already and is related to why the zero-day vulnerabilities weren’t disclosed. We’ll probably never know for sure, but it wouldn’t surprise me if someone was hoarding the vulnerabilities within or for a particular team or group, perhaps in order to prevent them from being subject to an “equities review” process that might decide they were better off being disclosed.

What we need is more communication, not less, and we need to make the communication flow in a direction that leads to public disclosure and vulnerability remediation in a timely fashion, while also realistically acknowledging the demand for offensive capacity. Splitting up the NSA wouldn’t help that.

However, in the spirit of “modest proposals”, a change in leadership structure might: currently, the Director of the NSA is also the Commander of the U.S. Cyber Command and Chief of the Central Security Service. It’s not necessarily clear to me that having all those roles, two-thirds of which are military and thus tend to lean ‘offensive’ rather than ‘defensive’, reside in the same person is ideal, and perhaps some thought should be given to having the NSA Director come from outside the military, if the goal is to push the offensive/defensive pendulum back in the opposite direction.


[/politics] permalink

The other day I discovered an interesting Python behavior that I somehow had managed not to hit before — in fairness, I use Python mostly for scripting and automation, not ‘real’ software development, but I still thought I understood the basics reasonably well.

Can you spot the problem? The following is designed to remove words from a list if they are shorter than a certain number of characters, specified by args.minlength:

for w in words:
    if len(w) < int(args.minlength):
        words.remove(w)

The impending misbehavior, if you didn’t catch it by this point, is not necessarily obvious. It won’t barf an error at you, and you can actually get it to pass a trivial test, depending on how the test data is configured. But on a real dataset, you’ll end up with lots of words shorter than args.minlength left in words after you thought you’d iterated through and cleaned them!

(If you want to play with this on your own, try running the above loop against the contents of your personal iSpell dictionary — typically ~/.ispell_english on Unix/Linux — or some other word list. The defect will quickly become apparent.)
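Here’s a smaller self-contained reproduction (a sketch, with a hard-coded minimum length of 3 standing in for args.minlength):

words = ["a", "bb", "ccc", "dd"]
for w in words:
    if len(w) < 3:
        words.remove(w)

# Expected ['ccc'], but this prints ['bb', 'ccc']: removing "a" shifted
# everything left, so the loop's internal index skipped right over "bb".
print(words)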

A good description of the problem, along with several solutions, is of course found on Stack Overflow. But to save you the click: the problem is iterating over a mutable object, such as a list, while modifying it (e.g. by removing items) inside the loop. Per the Python docs, you shouldn’t do that:

If you need to modify the sequence you are iterating over while inside the loop (for example to duplicate selected items), it is recommended that you first make a copy. Iterating over a sequence does not implicitly make a copy.

The solution is easy:

for w in words[:]:
    if len(w) < int(args.minlength):
        words.remove(w)

Adding the slice notation causes Python to iterate over a copy of the list (pre-modification), which is what you actually want most of the time, and then you’re free to modify the actual list all you want from inside the loop. There are lots of other possible solutions if you don’t like the slice notation, but that one seems pretty elegant (and it’s what’s recommended in the Python docs so it’s presumably what someone else reading your code ought to expect).
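For instance, a one-line sketch of the list-comprehension approach; note that it rebinds the name words to a new list rather than mutating the original in place, which matters if anything else holds a reference to the old list:

words = [w for w in words if len(w) >= int(args.minlength)]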

I’d seen the for item in list[:]: construct in sample code before, but the exact nature of the bugs it prevents had never been clear to me. Perhaps this will be enlightening to someone else as well.


[/technology/software] permalink

Tue, 23 Aug 2016

About fifty pages into John Bruce Medaris’s 1960 autobiography Countdown for Decision, there is an unsourced quote attributed to Col. C.G. Patterson, who in 1944 was in charge of Anti-Aircraft Artillery for the U.S. First Army, outlining the concept of a “technological casualty”:

“If a weapon costs more to build, in money, materials, and manpower, than it costs the enemy to repair the damage the weapon causes, the user has suffered a technological casualty. In any long-drawn-out struggle this might be the margin between victory and defeat.” [1]

As far as I can tell, the term “technological casualty” never passed into general usage with that meaning, which is unfortunate. And although sources confirm that Col. Patterson existed and by all accounts served admirably as the commander of air defense artillery for First Army in 1944, there doesn’t appear to be much record of the quote outside of Medaris’s book. Still, credit where it is most likely due; if ever a shorthand name for this idea is required, I might humbly suggest “Patterson’s Dictum”. (It also sounds good.)

I suspect, given Patterson’s role at the time, that the original context of the quote was offensive or defensive air capability; perhaps it referred to the attrition of German capability that was then ongoing. In Countdown, Medaris discusses it in the context of the V-2, which probably consumed more German war resources to build than the damage it did cost the Allies to repair. But the idea is certainly applicable more broadly.

On its face, Patterson’s statement assumes a sort of attritional, clash-of-civilizations, total-commitment warfare, where all available resources of one side are stacked against all available resources of the other. One might contend that it doesn’t have much applicability in the age of asymmetric warfare, now that we have a variety of examples of conflicts won — in the classic Clausewitzian political sense — by parties who never possessed any sort of absolute advantage in money, materials, or manpower.

But I would counter that even in a modern asymmetric war, or in realpolitik-fueled ‘brushfire’ conflicts with limited aims, the fundamental calculus of war still applies; it just isn’t as straightforward. Beneath all the additional terms that get added to the equation lies the essential fact that defeat is always possible if victory proves too expensive. Limited war doesn’t require that you outspend your adversary’s entire society, only their ‘conflict budget’: their willingness to expend resources in that particular conflict.

Which makes Patterson’s point quite significant: if a modern weapons system can’t subtract as much from an adversary’s ‘conflict budget’ — through actual destructive power, deterrence, or some other effect — as fielding it (including the risk of losing it) subtracts from ours, then it is essentially a casualty before it ever reaches the field.

[1] Countdown for Decision (1960 ed.), page 51.


[/politics] permalink

Mon, 22 Aug 2016

Ars Technica has a nice article, published earlier this month, on the short life of the Digital Compact Cassette format, one of several attempts to replace the venerable analog cassette tape with a digital version before the cassette’s eventual demise in the download era.

At risk of dating myself, I remember the (very brief) rise and (anticlimactic) fall of the Digital Compact Cassette, although I was a bit too poor to be in the target market of early adopters and hi-fi-philes at whom the first decks were aimed. And while the Ars article is decent, it ignores the elephant in the room that contributed mightily to DCC’s demise: DRM.

DCC was burdened by a DRM system called SCMS (the Serial Copy Management System), also present in the consumer version of DAT. This inclusion was not the fault of Philips or Matsushita (later Panasonic), who designed DCC, but the result of an odious RIAA-backed law passed in 1992, the Audio Home Recording Act, which mandated it in all “digital audio recording device[s]”.

It is telling that of the various formats encumbered by SCMS, exactly zero ever succeeded in the marketplace in a way that threatened the dominant formats. The AHRA was (and remains, de jure: it’s still on the books, a piece of legal “unexploded ordnance” waiting for someone to step on it) the RIAA’s most potent and successful weapon for suppressing technological advancement and maintaining the status quo throughout the 1990s.

Had it not been for the AHRA and SCMS, US consumers might well have had not one but two alternative digital music formats besides the CD, and perhaps three: consumer DAT, DCC, and MiniDisc. Of these, DAT is probably the best from a pure-technology perspective — it squeezes more data into a smaller physical space than the other two, eliminating the need for lossy audio compression — but DAT decks are mechanically complex, owing to their helical-scan transport, and the smallest portable DATs never got down to Walkman size. DCC, on the other hand, used a more robust linear tape system, and perhaps most importantly it was backward-compatible: DCC decks could play existing analog cassettes. I think there is a very good chance it could have won the battle, had the combatants been given a chance to take the field.

But the AHRA and SCMS scheme conspired to make both consumer-grade DAT and DCC unappealing. Unlike today, where users have been slowly conditioned to accept that their devices will oppose them at every opportunity in the service of corporations and their revenue streams, audio enthusiasts from the analog era were understandably hostile to the idea that their gear might stop them from doing something it was otherwise quite physically capable of doing, like dubbing from one tape to another, or from a CD to a tape, in the digital domain. And a tax on blank media just made the price premium for digital, as opposed to analog, that much higher. If you are only allowed to make a single generation of copies due to SCMS, and if you’re going to pay extra for the digital media due to the AHRA, why not just get a nice analog deck with Dolby C or DBX Type 2 noise reduction, and spend the savings on a boatload of high-quality Type IV metal cassettes?

That, anyway, was the question I remember asking myself at the time. I never ended up buying a DCC deck, and like most of the world I continued listening to LPs, CDs, and analog cassettes right up until cheap computer-based CD-Rs, and then MP3 files, dragged the world of recorded music fully into the digital age, and out of the shadow of the AHRA.


[/technology] permalink

Tue, 16 Aug 2016

Bloomberg’s Matt Levine has a great article, published today, which begins with a discussion of the apparently-hollow shell company “Neuromama” (OTC: NERO), which — cue shocked face — is probably not in reality a $35 billion USD company, but quickly moves into a delightful discussion of insider trading, money market rates, an “underpants gnomes”-worthy business plan, and the dysfunction of the Commodity Futures Trading Commission. There’s even a bonus mention of Uber shares trading on the secondary market, which is something I’ve written about before. Definitely worth a read:

Heavy Ion Fusion and Insider Trading

If you only read one section of it, the part on “When is insider trading a crime?” is, in my humble opinion, probably the best. (Memo to self: next time there’s a big insider-trading scandal, be sure to come back to this.) But really, it’s a good article. Okay, there’s a bit too much gloating about those stupid regulators and their stupid regulations for anyone who isn’t a hedge fund manager to get excited about, but it’s fucking Bloomberg; that kind of thing is probably a contractual obligation for getting printed there. Also it’s Congress’ fault anyway, as usual.


[/finance] permalink