Kadin2048's Weblog


Wed, 26 Sep 2012

I recently had a hardware failure, and decided to take the opportunity to upgrade my aging home server from Ubuntu ‘Dapper Drake’ to Scientific Linux. The reasons for my move away from Ubuntu are an article unto themselves, but they boil down to what I see as an increasing contempt for existing users (and a pointless pursuit of hypothetical tablet users — everybody wants to try their hand at being Apple these days, apparently unaware that the role has been filled), combined with — and this is significantly more important — the fact that I have been using RPM-based distros far more often at work than Debian/APT-based ones, despite the many advantages of the latter. So I decided to switch the server to SL.

The actual migration process wasn’t pretty and involved a close call with a failing hard drive which I won’t bore you with. The basic process was to preserve the /home partition while tossing everything else. This wasn’t too hard, since SL uses the same Anaconda installer as Fedora and many other distros. I just told it to use my root partition as /, my home partition as /home, etc.

And then I rebooted into my new machine. And seemingly everything broke.

The first hint was on login: I got a helpful message informing me that my home directory (e.g. /home/myusername) didn’t exist. Which was interesting, because once logged in I could easily cd to that directory, which plainly did exist on the filesystem.

The next issue was with ssh: although I could connect via ssh as my normal user, it wasn’t possible to use public-key auth based on the authorized_keys file in my home directory. It was as though the ssh process wasn’t able to access my home directory…

As it turned out, the culprit was SELinux. Because the “source” operating system that I was migrating from didn’t have SELinux enabled, and the “destination” one did, there weren’t proper ‘security contexts’ (extended attributes) on the files stored on /home.

The solution was pretty trivial: I had to run # restorecon -R -v /home (note: as root!), which took a few minutes, and then everything worked as expected. This was something I only discovered, after much searching, in a forum posting regarding a Fedora 12 install. I’m noting it here partly so that other people can find it more easily in the future, and because, unfortunately, there are forums filled with people experiencing the same problem and receiving terrible advice that they need to reformat /home (in effect, throw away all their data) in order to upgrade or change distros.
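For the record, the check-and-fix sequence looks roughly like this (a sketch, not a transcript from my system; the exact context names depend on the policy in use):

```shell
# Show the SELinux labels on the migrated home directories. After
# copying a /home partition over from a non-SELinux distro, the
# context column will be missing or wrong; a correctly labeled
# home directory shows a context like user_home_dir_t.
ls -dZ /home/*

# As root, relabel everything under /home to match the active policy.
# -R recurses, -v reports each file changed; add -n first if you want
# a dry run that only shows what would be relabeled.
restorecon -R -v /home
```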

Bottom line: if you are running into weird issues on login (console or SSH) after an upgrade from a non-SELinux distro to a SELinux-enabled one, try rebuilding the security context before taking any drastic steps.

0 Comments, 0 Trackbacks

[/technology] permalink

Wed, 01 Aug 2012

Lockheed is apparently working on a next-generation carrier-based unmanned fighter aircraft, the “Sea Ghost.” At least, they are “working” on it in the sense that they paid some graphic designer to make some CGI glamour shots of something that might be a UAV, sitting on the deck of what is presumably an aircraft carrier. As press releases go it’s a little underwhelming, but whatever.

From the rendering, it appears that the Sea Ghost is a flying wing design, which is interesting for a number of reasons. Flying wings are almost as old as aviation in general, but have proved — with a few notable exceptions — to be largely impractical, despite having some nice advantages on paper over the traditional fuselage-plus-wings monoplane design. It’s one of those ideas that seems so good that, despite a sobering list of failures, it just won’t die.

One of the big problems with flying wings is yaw control. Since they lack a tail and traditional rudder, getting the aircraft to rotate on the horizontal plane is difficult. Typically — in the case of the B2, anyway — this is accomplished by careful manipulation of the ailerons to create drag on one wing, while simultaneously compensating on the other side in order to control roll. This is, to put it mildly, a neat trick, and it’s probably the only reason why the B2 exists as a production aircraft (albeit a really expensive one).

I suspect that the Sea Ghost is built the same way, if only because it’s been proven to work and the Lockheed rendering doesn’t show any other vertical stabilizer surfaces that would do the job.

But a thought occurred to me: if you can make a drone small and light enough (more precisely, with a small enough moment of inertia), you don’t need to do the B2 aileron trick at all. You could maneuver it like a satellite. That is, by using a gyroscope not simply to sense the aircraft’s change in attitude, but actually to move the aircraft about the gyroscope. Simply: you spin up the gyro, and then use servos to tilt the gimbal that the gyro sits in. The result is a torque on the airframe opposite the direction in which the gyro’s axis is moved. With multiple gyros, you could potentially have roll, pitch, and yaw control.
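A rough sanity check on the idea, with assumed numbers (the rotor’s mass, size, and spin rate below are my guesses, not anything from an actual design):

```
L = I·ω        angular momentum of the spinning rotor
τ = Ω × L      torque reacted onto the airframe when the gimbal is
               tilted at angular rate Ω

Example: a 0.5 kg rotor disc of radius 5 cm has I = ½mr² ≈ 6.3×10⁻⁴ kg·m².
Spun at 20,000 rpm (ω ≈ 2,094 rad/s), that gives L ≈ 1.3 kg·m²/s, so
tilting the gimbal at 1 rad/s reacts roughly 1.3 N·m onto the airframe:
plenty to yaw a few-kilogram model, and utterly negligible for anything
B2-sized.
```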

This isn’t practical for most aircraft — aside from helicopters which do it naturally to a degree — because they have too much inertia, and the external forces acting against them are too large; the gyroscope you’d need to provide any sort of useful maneuvering ability would either make the plane too heavy to fly, or take up space needed for other things (e.g. bombs, in the case of most flying wing aircraft). And that might still be the case with the Sea Ghost, but it’s not necessarily the case with all drones.

The smaller (and, more importantly, lighter) the aircraft, the easier it would be to maneuver with a gyroscope rather than with external aerodynamic control surfaces. Once you remove the requirement to carry a person, aircraft can be quite small.

It wouldn’t surprise me if you could maneuver a small hobbyist aircraft with a surplus artificial horizon gyro. To my knowledge, nobody has done this yet, but it seems like a pretty straightforward merger of existing technology. You’d need a bunch of additional MEMS gyros, which are lightweight, to sense the change in attitude and stop and start the maneuvering gyro’s movement, but there’s nothing that seems like an obvious deal-breaker.

The advantage of such a system would be that there’s no change to the outside skin of the aircraft in order to make it maneuver (within the limits of the force provided by the gyro). That would mean a lower radar cross section, and potentially less complexity and weight due to fewer moving parts in the wings.

Just one of the many intriguing possibilities you come up with, when you take 80 kilos of human meat out of the list of requirements.

Almost enough to get me back into model airplanes again.

0 Comments, 0 Trackbacks

[/technology] permalink

Sun, 08 Apr 2012

For no particularly good reason, I decided I wanted to play around with IBM VM earlier this weekend. Although this would seem on its face to be fairly difficult — VM/370 is a mainframe OS, after all — thanks to the Hercules emulator, you can get it running on either Windows or Linux fairly easily.

Unfortunately, many of the instructions I found online were either geared towards people having trouble compiling Hercules from source (which I avoided thanks to Ubuntu’s binaries), or assume a lot of pre-existing VM/370 knowledge, or are woefully out of date. So here are just a couple of notes should anyone else be interested in playing around with a fake mainframe…

Some notes about my environment:

  • Dual-core AMD Athlon64 2GHz
  • 1 GB RAM (yes, I know, it needs more memory)
  • Ubuntu 10.04 LTS, aka Lucid

Ubuntu Lucid has a good binary version of Hercules in the repositories. So no compilation is required, at least not for any of the basic features that I was initially interested in. A quick apt-get install hercules and apt-get install x3270 were the only necessities.

In general, I followed the instructions at gunkies.org: Installing VM/370 on Hercules. However, there were a few differences. The guide is geared towards someone running Hercules on Windows, not Linux.

  • You do not need to set everything up in the same location as the Hercules binaries, as the guide seems to indicate. I created a vm370 directory in my user home, and it worked fine as a place to set up the various archives and DASD files (virtual machine drives).

  • The guide takes you through sequences where you boot the emulated machine, load a ‘tape’, reboot, then load the other ‘tape’. When I did this, the second load didn’t work (indefinite hang until I quit the virtual system from the Hercules console). But after examining the DASD files, it seemed like the second tape had loaded anyway, but the timestamp indicated that it had loaded at the same time as the first tape. I think that they both loaded one after the other in the first boot cycle — hard to tell for sure at this point, but don’t be too concerned if things don’t seem to work as described; I got a working system anyway. Update: The instructions work as described; I had a badly set-up DASD file that was causing an error, which did not show itself until later when I logged in and tried to start CMS.

  • To get a 3270 connection, I had to connect to the machine by IP address on port 3270; trying to connect to “localhost” didn’t work. I assume this is just a result of how Hercules is set up to listen, but it caused me to waste some time.

  • The tutorial tells you to start Hercules, then connect your 3270 emulator to the virtual system, then run the ipl command; the expected result is to see the loader on the 3270. For me, this didn’t work… the 3270 display just hung at the Hercules splash screen. To interact with the loader, I had to disconnect and reconnect the 3270 emulator. So, rather than starting Hercules, connecting the 3270, then ipl-ing, it seems easier to start Hercules, ipl, then connect and operate the loader.
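Putting those last two points together, the order that worked for me can be sketched like this (the config filename and the IPL device address are placeholders; use whatever your own setup defines):

```shell
# Start the emulator with your configuration file:
hercules -f vm370.cnf

# At the Hercules console, IPL from the installation 'tape' device,
# e.g. (580 is a placeholder address; use the one from your .cnf):
#   ipl 580

# Then, from another terminal, attach the 3270 console. Connecting by
# IP address on the Hercules console port (3270, in my setup) worked
# where 'localhost' did not:
x3270 127.0.0.1:3270
```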

Of course, when you get through the whole procedure, what you’ll have is a bare installation of VM/370… without documentation (or extensive previous experience), you can’t do a whole lot. That’s what I’m figuring out now. Perhaps it’ll be the subject of a future entry.

0 Comments, 0 Trackbacks

[/technology/software] permalink

Fri, 16 Mar 2012

After switching from my venerable Nexus One to a new Samsung Galaxy SII (SGS2) from T-Mobile, I was intrigued to discover that it has a fairly neat WiFi calling ability. This feature lets the phone use a wireless IP access point to place calls, in lieu of the normal cellular data network. On one hand it’s a bit of a ripoff — even though you’re using your own Internet rather than T-Mobile’s valuable spectrum, they still use up your minutes at the same rate; however, it’s nice if you travel to a place with crummy cell service but decent wireless Internet.

When the feature is enabled, the phone will switch preferentially to WiFi for all calls once it has associated with an access point. (It can be disabled if you’d prefer it not to do this.) There are still some very rough edges: the biggest issue is that there’s no handoff, so if you place a call over WiFi and then walk out of range of the AP, the call drops. Whoops.

I was curious how the calls were actually handled on the wire, and in particular how secure things were. To this end, I decided to run a quick Wireshark analysis on the phone, while it was connected to my home WiFi AP.

The setup for this is pretty trivial, and out of scope for this entry; basically you just need to find a way to get the packets going to and coming from the phone copied to a machine where you can run Wireshark or tcpdump. You can do this with an Ethernet hub (the old-school method), via the router’s configuration, or even via ARP spoofing.
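For instance, once the phone’s traffic is visible to the capture box (mirrored port, hub, or ARP spoofing), a one-liner does the job; the interface name and the phone’s IP address here are placeholders for whatever your network uses:

```shell
# Capture everything to or from the phone and save it to a pcap file
# that Wireshark can open later for analysis:
sudo tcpdump -i eth0 -w phone.pcap host 192.168.1.50
```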

With Wireshark running and looking at the phone’s traffic, I performed a few routine tasks to see what leaked. The tl;dr version of all of this? In general, Android apps were very good about using TLS. There wasn’t a ton of leakage to a would-be interceptor.

Just for background: Gmail and Twitter both kept almost everything (except for a few generic logo images in Twitter’s case) wrapped in TLS.

Facebook kept pretty much everything encrypted, except for other users’ profile images, which it sent in the clear. This isn’t a huge issue, but it does represent minor leakage; the reason for this seems to be that Facebook keeps the images cached on a CDN, and the CDN servers don’t do SSL, apparently. I’m not sure what sort of nastiness or attacks this opens up, if any (perhaps social engineering stuff, if a motivated attacker could recover your friends list), but it’s worth noting and keeping in mind.

I next confirmed that text messages (SMSes) aren’t sent in the clear. They are not, although I’m not 100% sure they’re even sent over the data connection — it’s hard to tell, among the SIP keepalives, whether an SMS went out via the WiFi connection, or if the phone used the actual cell-data connection instead. Sometime when I’m in a location without any GSM coverage but with WiFi, I’ll have to test it and confirm.

Last, I made a quick call. This is what I was most interested in, since encrypted SIP is surprisingly uncommon — most corporate telephony systems don’t support it, at least not that I’ve seen or worked with. It wouldn’t have surprised me much at all if the SIP connection itself was all in the clear. However, that doesn’t seem to be the case. The call begins with a sip-tls handshake, and then there are lots of UDP packets, all presumably encrypted with a session key negotiated during the handshake. At any rate, Wireshark’s built-in analysis tools weren’t able to recover anything, so calls are not script-kiddie vulnerable. Still, I’m curious about what sort of certificate validation is done on the client side, and how the phone will react to forged SSL certs or attempts by a MITM to downgrade the connection.
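For anyone repeating the exercise, a few Wireshark display filters cover most of what I looked at (filter syntax only; note that Wireshark of this vintage spells the TLS dissector “ssl”):

```
sip                  any unencrypted SIP signaling (ideally none appears)
tcp.port == 5061     traffic on the standard SIP-over-TLS port
ssl.handshake        TLS handshakes, including the sip-tls negotiation
rtp                  unencrypted media; silence here suggests the stream
                     is encrypted (e.g. SRTP)
```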

Certainly lots of room for further experiments, but overall I’m relieved to see that the implementation isn’t obviously insecure or vulnerable to trivial packet sniffing.

0 Comments, 0 Trackbacks

[/technology/mobile] permalink