unixronin: Galen the technomage, from Babylon 5: Crusade (Default)

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Tuesday, September 6th, 2011 06:49 pm

A while back, Microsoft dumped a whole lot of outdated DeLorme mapping packages via Amazon for something like $15 each.

Well, sure.  The MAPS go out of date.  But the included GPS receiver doesn't, and the entire principle of GPS relies on having incredibly accurate time sources.  This time source information is available both in the GPS NMEA data stream and via PPS.  Obvious application is obvious.

Well, I've been studying the issue from the Solaris 10 point of view for some time, but eventually concluded that the driver support just doesn't seem to be there.  However, it's a different story on Linux.

Short story?  Using the GPS receiver from the MS package, I now have my own local stratum 1 timeserver, with the GPS itself as its stratum 0 reference clock.  Subject to system latency (which should be low on this machine, with six 3.2GHz cores), it should be accurate to about ±1µs. With the port baud rate turned up to 115200, ntpd reports zero jitter.
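
For reference, the ntpd side of this is only a couple of lines.  Here's a hedged sketch using ntpd's generic NMEA reference-clock driver (type 20); the unit number, device symlinks, and the time2 fudge value are assumptions you'd tune for your own receiver and serial latency:

```
# /etc/ntp.conf (sketch) -- GPS NMEA refclock with PPS
# The NMEA driver reads /dev/gpsU and /dev/gpsppsU for unit U;
# symlink those to the real serial and PPS devices first.
server 127.127.20.0 mode 80 minpoll 4 prefer  # driver 20 = NMEA; mode 80 selects 115200 baud
fudge  127.127.20.0 flag1 1 time2 0.400       # flag1 1 enables PPS; time2 = NMEA serial lag (tune it)
```

With PPS in play the NMEA sentences just number the seconds; it's the pulse edge that actually disciplines the clock, which is where the microsecond-level accuracy comes from.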

...Oh, the mapping software? That went in the trash.  Duh.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Monday, July 18th, 2011 04:54 pm

This C|Net article asks whether Netflix is killing DVDs "like Apple killed floppies".  And paints it as a good thing.

Well, first of all, it wasn't Apple that killed floppies.  Floppies stayed around long after Apple stopped including floppy drives.  The pretensions of the Mac faithful notwithstanding, Apple is far too much a niche player — especially in business — to have the power to singlehandedly kill the floppy disk.  What killed the floppy was the rapidly increasing size of data such as digital camera images etc (and the increasing penetration of new forms of digital data, notably MP3 music), rapidly dropping prices on optical media, and increasingly universal availability of write/rewrite capable optical drives combined with improving software integration that made it possible for just anyone to use them.  Apple dropping the 3.5" floppy drive from the blueberry iMac was, in the larger scheme of things, a complete non-event.  When your mother could drop a blank disc in the DVD burner and drag a video of the grandkids onto it from her camcorder, and have it just work ... THAT'S when the floppy disc's days became numbered.

But let's talk about his premise that Netflix is killing the DVD, and that it's a good thing.

Well, sure.  If Netflix is able to kill the DVD, it'll be a great thing ... for Netflix.  And for the studios; it'll get them closer to their dream of you having to pay for your entertainment every time you watch or listen to it.  But for anyone else?

Oh, no, it won't be good for the rest of us.  Now, we can have our physical media.  We can buy the disc once, and watch it whenever we want. Even when our ISP is having technical problems, or there's an outage somewhere and the 'net is crawling.  We can buy it, rip it, and put it on a portable device to watch it or listen to it when we're away from a network connection.

But in an all-streaming world?

Oh, yes, Hollywood and Netflix would love that.  They'd get to charge you for every time you watch the movie.  Every time you listen to the song.  What, you want to watch that movie at your vacation cabin up in the mountains, but you have no broadband up there?  Too bad.  Want to listen to music while you work, but you work in a steel-framed building that jams your 4G connectivity, and your employer doesn't allow music streaming using company resources?  Tough. Want movies in the back seat to keep the kids quiet on the seven-hour drive to Grandma's place out in the country?  Sucks to be you.  Want to watch some niche art film from France that's never sold enough copies to make it worth Netflix's trouble to add it to their catalog?  Serves you right for watching that artsy-fartsy foreign crap. Shut up and stream Transformers 7 in 5D.

Choice.  Now that's good.  A choice of media.  A choice of how you access what you want to be entertained by.  Use the one that works for you.  Stream the movie you figure you'll only bother to watch once.  Buy a physical copy of the one you rewatch about twice a year, or the one the kids watch about twice a month.

But if streaming is all there is?

Come on.  We're talking about Hollywood.  If streaming is the only way you can watch movies any more, how long do you really think it'll be before it costs as much to stream the movie once as it costs to buy the disc now?

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Saturday, July 16th, 2011 03:32 pm

I've been playing more with the voice-command navigation, and it's very good.  It got "Parker's Maple Barn, New Hampshire" on the first try, and nailed it.  Likewise "Best Buy Nashua New Hampshire" and "Best Buy Concord New Hampshire".  (I dropped off four obsolete computers and two monitors for recycling today.  Three items per household per day means three items per STORE per household per day, right....?  Since they don't record any information about who's dropping them off...)

The GPS receiver is a power-hungry little sucker, though.  I bought a USB car charger that fits flush into the 12V socket and effectively turns it into two USB charging ports — one of these, which claims to be able to deliver 1A per port — and it was not able to significantly charge the battery between Nashua and Concord with the GPS enabled. After leaving Concord, though, when I turned the GPS off, it managed to charge the battery from 10% to 70% by the time I got home.  This is telling me that the GPS receiver and the Google navigation app together are consuming pretty much the entire output of the charger.  (However, the phone didn't actually die between Nashua and Concord, which suggests that if it starts out charged, the charger will be able to keep it charged with GPS navigation running.  I totally need a more compact cable, though.  This one looks like just the thing.)

I have used, so far, 0.025GB of data in the course of probably three to four hours of total GPS navigation use.
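
That works out to a pretty modest rate.  A quick back-of-envelope check (3.5 hours is my assumed midpoint of "three to four hours"):

```shell
# Rough navigation data-rate check; 3.5 h is an assumed midpoint.
awk 'BEGIN {
    mb_per_hour = 0.025 * 1024 / 3.5        # treating 1 GB as 1024 MB
    printf "%.1f MB/hour\n", mb_per_hour
    printf "%.0f hours of navigation per 2 GB\n", 2048 / mb_per_hour
}'
```

Call it around 7 MB per hour, or on the order of 280 hours of navigation before a 2GB monthly cap would even come into play.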

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Friday, July 15th, 2011 03:11 pm

I've had it four full days now, and I have to say, it's pretty good. I'm not convinced yet it'll make as good a phone, in terms of voice clarity, as the RAZR2, because the open RAZR is ... well ... more phone-shaped.  I suspect the Droid, like most if not all modern smartphones, will be of limited use as an actual phone without a headset.

Well, that's why I had the foresight to buy a headset with it. An inexpensive, corded one, for now; I really don't use a cell phone very often.  Seriously, we're talking in the region of six hundred minutes or less per year here.  I expect I'll be using the headset whenever possible, and perhaps may eventually get a better headset than the minimalist wired earbuds-plus-inline-mic one I bought to start with.

The screen is nice; large, bright and sharp.  The physical keyboard is pretty good, considerably better than the keyboard on the HTC Merge, previously the holder of the Best Keyboard On A Phone title.  You can actually realistically type on it, and with a pretty low error rate.  (No "damn you, autocorrect".)  Battery life so far seems promising; it looks like I should be able to expect 2+ days from the stock 1450mAh battery, in normal use, and probably about three days from the [optional] 1930mAh extended battery.  Using it for navigation is harder on the battery; the Google navigation works great, but the GPS receiver is power-hungry. Of course, if using it for navigation, one can use the optional windshield mount and plug it into a 12V USB charger, which you'd probably want to do anyway because if using it for navigation without a human navigator/co-driver you'd (a) need it mounted, and (b) need to turn off screen timeout.  I did have to hard-powercycle the phone (power off, battery out for ten seconds) to get the GPS receiver online for the first time.

As for charging, when connected via USB, Gentoo Linux sees it without any hesitation as a 12GB USB-storage device (with about 1.75GB free) and it happily charges, unlike the Motorola RAZR2 it's replacing (which neither mounts as storage nor charges from a Linux box).  There are actually three charging options — direct USB connection, included wall-socket USB charger, or an optional inductive charging pad and back cover.  The inductive charging back will accommodate the extended battery.

The Droid3 comes with the usual Verizon V-cast applications and a bunch of preinstalled apps, many of which I don't give a crap about (like for instance some Mobile NFL thing). Unfortunately few of them are uninstallable.  I uninstalled the WGA golf game immediately (puh-leeze!), and would have ditched the Mobile NFL app as well if I could (I mean, me?  NFL?  Seriously?), but the only other preinstalled apps that appear to be uninstallable are the Youtube app and 'Nova', which has nothing to do with the PBS documentary series but instead appears to be a trial version of a Halo-alike FPS game.  (I uninstalled it too.  I have no desire to try to play anything remotely FPS-ish on a screen this tiny, particularly through a phone touchscreen interface.)  So far, I've installed a GasBuddy app and ColorNote, a notepad app with a reasonably well thought-out checklist feature.

The built-in cameras?  Not bad.  Here's a sample from the main (back) camera (quarter scale, click it for full size):

And here's the front camera, intended for videoconferencing, full size:

(Trust me, it's not the camera's fault.)

Things I'd change?  Well, I wish the corporate sync didn't force a screenlock password on me.  I prefer to choose for myself when I lock it, thank you.  And it'd be nice if there was a display timeout choice between two and ten minutes.  A five-minute option would be good.  Getting the back cover off is a bit of a pain in the ass, but you shouldn't need to do that often.

Oh, hey, I know what I forgot to talk about:  The voice recognition.  Specifically, voice-recognition navigation mode, which was the first thing I tried voice command on.  Untrained, it got "Nashua, New Hampshire" on the first try.  Later, we tried "Go home".  It got that in one try as well, but didn't know what to do with it.  Our street address, it fluffed on the first attempt, but got it perfectly when repeated slightly slower. No doubt it will do even better once I actually go through and train it.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Wednesday, July 6th, 2011 05:19 pm

Soliciting the wisdom of the collective here.

First, the reasons:

Work issued me a crackberry when I was hired, and I &$@^#^%@%#@ HATE the *@$()*&$#^%^@# thing.  It can reduce me to frustrated rage in mere minutes.  It's an utter mystery to me how in the name of Nyarlathotep Blackberry ever became a commercial success.

I have an alternative.  Work wants me to have a smartphone so they can reach me via email.  But it doesn't HAVE to be a crackberry.  If I buy my own Android phone, on Verizon's network, work will pay for the Verizon service for as long as I work there.

However, if I'm going to buy an Android phone, I want one with a good physical keyboard.  (One of the most frustrating things about the Infernal Device is its almost unusably tiny keys.  It's all but impossible to type on.)  RIGHT NOW, the best physical keyboard on an Android phone is reportedly the HTC Merge.

I've handled one, and it's ... not bad.  But I understand the Merge is a niche phone with limited availability, largely due to unpopularity of the decision to tie it to Bing for search and location instead of Google.  (Have you ever used Bing? It's #%*(&$^! awful beyond words, even when it's filing the serial numbers off of search results from Google.)  Also, by current standards, it's slow and has a small screen.

There's what looks like an even better upcoming candidate, Motorola's new Droid 3.  Faster, more capable, bigger battery (up to 1930mAh), larger screen with 40% higher resolution, larger and more complete keyboard, optional inductive charging.

The catch?

The official available-in-stores date for the Droid 3 is reported to be TOMORROW.

The last day to get grandfathered in on Verizon's unlimited data plan is TODAY.

(This is possibly not a coincidence.)


The bottom tier in the three-tiered data plan that will replace the unlimited-data plan tomorrow is $30, the same price as the about-to-end unlimited-data plan, for 2GB/month.

2GB of *data* per month.  On a phone.  That just seems like a hell of a lot more than I'd ever use.  BUT, I don't know how much data mapping and navigation (pretty much the only data features I expect I'd ever use on it, unless I write mobile-specific versions of some of my own web apps) actually use.

SO.  If you have a smartphone, and you make significant use of mapping and navigation ... about how much data do they typically use?  How much data do YOU use per month?  Assume I won't be streaming music to it, watching movies on it (movies on a sub-5" screen?  That way lies madness), or anything like that.  I'll almost certainly never install a single game on it, and the odds are against me finding a "phone app" I give a crap about, beyond the web browser and maybe a notepad (though an SSH client might be useful in rare emergencies).

How likely am I to even approach 2GB of data usage per month?  I really have no idea how much data usage mapping (likely to be infrequent) and navigation (likely even more infrequent) use up.


Various smartphone users I know elsewhere have reported typical monthly data usage, with fairly heavy data use, typically under a third of a gigabyte.  One responder reports his fiancée's data usage hovers around 1GB per month, which she achieves by more-or-less continuous use of Pandora streaming radio.
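
That 1GB/month Pandora figure is plausible on a back-of-envelope basis, assuming the mobile stream runs around 64 kbit/s (my assumption, not the responder's):

```shell
# Back-of-envelope: hours of ~64 kbit/s streaming per 1 GB of data.
awk 'BEGIN {
    mb_per_hour = 64 * 3600 / 8 / 1000      # kbit/s -> MB/hour, ~28.8
    printf "%.1f MB/hour, %.1f hours per GB\n", mb_per_hour, 1000 / mb_per_hour
}'
```

About 35 hours per gigabyte, i.e. an hour and change of streaming every day, which squares with "more-or-less continuous use".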

So I'd say the odds of me ever needing 2GB of phone data bandwidth in a month are slim to none ... and Slim just left town.  So I see no reason why the end of Verizon's unlimited-data plan should matter one bit to me. (Or one gigabit, so to speak.)

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Friday, June 24th, 2011 10:05 am

Important PSA for Gentoo users:

If portage updates you to x11-base/xorg-server-1.10.2, YOU MUST MANUALLY RE-EMERGE ALL OF YOUR INSTALLED X11 DRIVERS.  Portage will not do it for you automatically because it does not know that the ABI has changed.  If you do not do this, X will fail to start.

Here's a one-liner to do the job:

emerge -1 $(eix -I 'x11-drivers/*' | grep x11-drivers | awk '{print $2}')

(For clarification, there's nothing new about major version bumps changing the ABI.  In most previous cases, though, such as when 1.7 and 1.9 came out, the postmerge notes on the ebuild either noted that the ABI had changed and all drivers needed to be rebuilt, or pointed to the postmerge notes on the main site, which told you the same thing.  This time, however, that advisory was omitted for some reason. So, if you don't happen to remember that the version you're updating from is NOT a 1.10.x subversion, you can find yourself with an X that won't start.)

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Wednesday, May 11th, 2011 11:26 am

I've always been a fan of Mike Oldfield.  In 1994, Mike released his 16th album, The Songs of Distant Earth, based on and inspired by Arthur C. Clarke's SF novel of the same title (and with Clarke's full approval and cooperation).  The CD was re-released very shortly afterward as an "enhanced" CD containing CDROM content including a VR experience built upon the Myst engine.  A second edition of the enhanced CD added more multimedia content including the full-length video for the second track, "Let There Be Light".

So far, so good.  But all is not perfect in the digital world.  There are two problems with the enhanced CD.

The first is that the CDROM data content is encoded and formatted for PowerPC Macs running OS7 through OS9 — because, in 1996, the Mac was arguably the best common graphics platform around.  If you try to load the enhanced CD into a Windows or Linux PC, it can't read that track.  Current versions of OSX no longer include OS9 compatibility and cannot run PowerPC binaries, so a current OSX Mac can't read it either.

So, forget the multimedia content, unless you have a classic Power Mac around that's still running either OS9 or an older version of OSX with the OS9 compatibility installed (which is really just an embedded OS9 installation).  But you can still play the music, right?

Well ... maybe.  You see, the other problem is that the CDROM track is the first track on the CD.  And if your optical drive and your computer can't read that first track, they can't play the CD — and in fact, in the case of Linux and Windows at least, they fail to even detect valid media in the drive. If you're using an ordinary "dumb" standalone CD player, it will simply skip the first track (because it can't read it either) and play the rest of the CD.  But if you're trying to play it on a computer?  You're pretty much boned.  (You might be able to skip past that first track and play the rest of the CD on an OSX Mac.  I haven't tried it.  I don't do Macs.)

So, if you bought the enhanced CD and want to play it on a computer, you're pretty much out of luck.  And if you're only going to play it on a regular CD player ... well, then you can't access the multimedia content, so why would you get the "enhanced" CD?

So now you want to find a copy of the audio-only original-edition CD?  Good luck with that.

Well, hey, it probably seemed like a great idea at the time.  But less than fifteen years later, its "enhanced" multimedia content is de facto unplayable, because it was coded for a niche system that de facto doesn't exist any more outside of computer museums and a few relict private collections.

There's a lesson here, and it's one that's already a significant concern among people in the computer industry who think ahead about things like this:

"Will we still be able to read this medium, or this data, in fifty years?"

Here's one we can't read after only fifteen.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Thursday, May 5th, 2011 03:12 pm

We've all heard lots of examples of security breaches, most recently Sony's entire PSN gaming network getting totally 0wn3d and virtually every bit of user credential data stolen including credit card data.  Some companies handle it well.  Some handle it poorly.  Sony actually reacted relatively well this time, shutting down PSN until they'd cleaned and re-secured it, publicly owning up to the breach, and notifying all their customers.  After the Hannafords data breach a couple of years ago, RBS Citizens Bank pre-emptively replaced all customer debit cards that had been used at a Hannafords store, just in case they might have been compromised.  (This not only protected customers, it made sound business sense too; it's cheaper to preemptively replace a bunch of cards that would have been replaced in the next year or two anyway, than to clean up fraudulent transactions later and possibly eat losses of anywhere from hundreds to thousands of dollars per card.)

Other companies haven't handled it so well.  I'm not naming any names, but there have been companies which, it later transpired, suffered massive customer data breaches and simply didn't bother to tell anyone until they were outed, and other companies that only notified their customers of breaches in states where the law forced them to do so.

Here's an example of doing it right, from an online password-keeper service.  On Tuesday morning, they noticed a network traffic anomaly during routine (for them) analysis of network traffic logs.  They couldn't be certain what it was; but they couldn't be 100% certain that it wasn't evidence of a breach.

So they played safe, and handled it as though it was.

In this case, we couldn't find that root cause.  After delving into the anomaly we found a similar but smaller matching traffic anomaly from one of our databases in the opposite direction (more traffic was sent from the database compared to what was received on the server).  Because we can't account for this anomaly either, we're going to be paranoid and assume the worst: that the data we stored in the database was somehow accessed.

A lot of their customers are complaining about being forced to change their master passwords.  But this is the right way to handle a possible security breach.  If you cannot rule out a breach, you study the evidence you have, you figure out the worst probable case consistent with the available evidence — and then you handle it as though that worst case happened.  Because in the long run, it's MUCH safer, and much less expensive, to assume you were breached and later find out that you were not, than to assume you were not breached ... and later find out that, actually, yes, you were.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Thursday, April 21st, 2011 10:07 pm

Without mentioning names, earlier this year the managed-hosting provider that I have been working for since October secured a very large contract with a new customer, for whom, over the past month and a half or so, we've built over a hundred servers and three SQL DB clusters across two continents.  I personally specced and built their entire DB infrastructure.  When I got up this morning, that buildout was still in testing.

Then pretty much Amazon's entire US-East EBS/EC2 cloud went down.  Guess where New Customer's current production environment was hosted.

Was, I say.  Five hours later, when their solution was still down and Amazon still couldn't even give them an ETA for restoration, they said to us, "Um ... this new solution you're building for us?  Can we push it into production ahead of schedule?  Like ... right now?"

"Sure," we said.  "We'll be standing by in case there's any problems."

So they rather nervously loaded their live production data, switched their DNS over ... and it all came up and Just Worked, first time.  It hasn't so much as hiccupped.  Needless to say, their executive management is ecstatic (and highly impressed with us).

I love it when a plan comes together...

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Sunday, December 12th, 2010 07:53 pm

My gods, Telstra could teach even Verizon an entire trunkload of tricks when it comes to crippling phones.  I recently bought an unlocked Motorola RAZR2 V9, and, yes, sure, it's unlocked, it works with a T-Mobile SIM card, but holy Hastur in a merrywidow, even the damn PHONE MAIN MENU is locked out.  'Setup' allows you to do one thing:  enable or disable the Bluetooth indicator.  Not Bluetooth, mind you — just the indicator.

What in the name of all good bits does Telstra expect their customers to use their phones as?  Clubs?

Anyway, right now I'm in the process of trying to reflash the phone with new, generic Motorola firmware. Also, shaking my head at the sheer stupidity inherent in Motorola's software update tool, which will only update the firmware of a phone that is still within its original warranty.  Planned obsolescence...?

unixronin: Sun Ultrasparc III CPU (Ultrasparc III)
Saturday, December 11th, 2010 07:13 pm

Well, let's see.

I've had one infant-mortality on the new babylon5, the Gigabyte Radeon HD4350 video card, which ceased to output video on DVI after two weeks of life.  I replaced that with an XFX HD5570 card that was on sale, and will be sending the Gigabyte card off for repair under warranty (then probably keep it as a spare, lacking another PCI-Express machine to put it in).  The replacement card also shows the symptom of the screen image being downscaled to about 95% when using the HDMI connection to the Asus 27" LCD monitor, which means that's probably a glitch in the monitor that I'll have to bother Asus about, after the 28" Hanns-G comes back from its third round of warranty repair (this time to repair the permanently-stuck-on red column driver at about column 600). I'll still probably put it back on babylon5 after it comes back, because I've discovered that I really miss those extra 120 rows of screen real-estate.  (The Asus is 1920x1080; the Hanns-G is pretty close to the only 1920x1200 monitor left in existence in the 27"-28" size range.)

Annoyingly, after I found tall-enough (and cheap!) VESA monitor mounts at Monoprice, I realized that while the Asus has 100x100 VESA mounting points the same as almost all of the mounting arms on the market, the Hanns-G has only 100x200 mounting points.  Fortunately, I found an adapter plate online this morning, which saves me from having to make one, to do a decent job of which I'd need access to a machine shop.

I have reasonable confidence at this point that I've gotten everything I need off excalibur, the old babylon5, and can safely wipe it and shut it down.  Not sure yet what I'll do with the old hardware. It's a single-core AthlonXP 2400+ on an Asus motherboard which is maxed out at 3GB of RAM (the new machine, also on an Asus board, is only half populated at 8GB), on which I've already replaced all the power smoothing capacitors after they leaked, plus an Intel-MegaRAID-based LSI Logic SATA-RAID controller, because that was the only way I could get it to boot off SATA after the last of its SCSI disks failed.

(New babylon5 has mirrored SATA-2 SSDs, and they are truly stunningly fast.  How do you like the sound of 2.17 wall-clock seconds to create a 1GB file?)
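
(For scale, 1 GiB in 2.17 wall-clock seconds is roughly 495 MB/s of sustained sequential write.  A sketch of that sort of test; the file name and size variable are mine, not from the original timing:

```shell
# Time a sequential write; bump size_mb to 1024 for a true 1 GiB run.
# conv=fsync makes dd wait for the data to actually hit the disk, so the
# elapsed time reflects real write throughput rather than page-cache speed.
size_mb=64
dd if=/dev/zero of=testfile bs=1M count="$size_mb" conv=fsync
```

Without the fsync, a test like this mostly measures how fast the kernel can dirty RAM.)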

Also on the hardware front, I just replaced the HP R3000XR UPS we've been running on for the past two years or so with an APC SU3000RMXL.  I was already sick and tired of the R3000XR's bizarre battery charge behavior¹, and when it stopped responding on its serial monitoring port, that was the final straw.  Now I just need to find my APC SmartUPS serial cables and compile apcupsd on babylon4.

Still in the queue:  two older Windows boxes² that have undemanding jobs to be replaced by low-power dual-core Atom boxes for the Pirate and Wen, and Valkyrie's desktop-case Dell GXP150 to be replaced (the plan is for her to get the current vorlon after it gets upgraded to maximum RAM, an Athlon64 X2 processor, and a reasonably recent video card).  On the infrastructure side, the main server needs a lot more RAM and an entire new rack of disks (and could use more CPU, but that's a separate problem), and the DB server also needs more RAM.  Then we'll be in pretty good shape again except for wireless and laptops.  Especially if I can replace the missing rackmount ear for my gigabit switch without having to pay Dell's prices for it (they want as much for a replacement rackmount kit as it would cost me to buy a whole new gigabit switch).

It appears we're even going to be able to get rid of our balky Color Laserjet 4500DN, which seems to have given up the ghost during the couple of weeks it just spent out in the unheated deckhouse.  We received an inquiry about whether we'd be interested in donating it for parts.  Yes, certainly.  Can I interest you in a 3KVA HP UPS as well...?

[1]  Left to itself, it would fully charge its battery pack to 100%, hold it there for 48 hours almost exactly, crash-drain it to just below 50%, gradually drain it further down to 40% charge over a period of about twelve hours, then maintain it at around 40% charge for about 32 days give or take a few hours.  Then it would charge it back to 100% and do the same thing all over again.  Plus, it took herculean efforts and about ten minutes reading and re-reading of the operators' manual every time to get the damned thing to power down.  HPaq = UPS FAIL.

[2]  One is a Pentium-III, I think; the other a single-core AthlonXP 1500+ even older than excalibur, which — worse yet — is running at only three-quarter speed because it has PC100 RAM instead of the PC133 it should have, but is so old and slow it's not worth fixing.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Wednesday, December 1st, 2010 04:20 pm

Coming soon to a fiber near you?  Interesting reading.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Wednesday, December 1st, 2010 12:48 pm

As reported here, on Marketwatch, and again here, on Techland, Comcast is extorting Netflix and Level 3 Communications to allow Comcast customers access to Netflix content.

"Nice business you have here.  It'd be a shame if anything were to, you know, happen to it."

This needs to be crushed like a bug, and soon.  Level3 made a strategic error in paying the fee at all in the first place.  "Once you have paid the Danegeld, you'll never be rid of the Dane."

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Monday, November 29th, 2010 09:49 pm

Specifically, for yet another "clever" Apple technology with unintended consequences.

You see, ever since getting my new machine up, I've been noticing a really excessive amount of clock drift relative to my internal stratum-3 network timeserver.  Like, six seconds per day of clock drift.  This is really bad considering I'm running ntp.  The old babylon5 had some clock drift, but never this bad.

Today, I finally found the problem.  And the problem is that mDNS is shooting me in the foot.

mDNS is Apple's multicast DNS technology, and certain Linux packages — CUPS, for one, also created at Apple — are really rather difficult to persuade not to use it.  But when you install it on Linux, it never gets properly configured.  The file /etc/nss-mdns.conf — which should contain a list of domains for which mDNS SHOULD resolve IP addresses — is not created, the package doesn't tell you that you need to create it, and no documentation on doing so is installed.

Now, this shouldn't be a problem.  You see, here's what should, if mdns were reasonably designed, happen when, say, ntpdate tries to synchronize to a timeserver:

  1. ntpdate requests an IP address for the designated timeserver.
  2. nsswitch checks the /etc/hosts file.  It doesn't find the IP there, so it passes the request to mDNS.
  3. mDNS checks /etc/nss-mdns.conf and finds it empty or nonexistent.  mDNS says "Oh, no domains for me to resolve, nothing for me to do here", and passes the request to DNS.
  4. DNS looks up the address, passes it back to ntpdate, and ntpdate makes the connection to the correct server.

But, Apple being Apple, this is what REALLY happens:

  1. ntpdate requests an IP address for the designated timeserver.
  2. nsswitch checks the /etc/hosts file.  It doesn't find the IP there, so it passes the request to mDNS.
  3. mDNS checks /etc/nss-mdns.conf and finds it empty or nonexistent.  mDNS says "Oh, there's no list of domains for me to resolve, but you didn't explicitly tell me NOT to resolve anything, so here, LET ME GIVE YOU SOME RANDOM IP ADDRESS THAT I PULLED OUT OF SOMEONE ELSE'S ASS BASED ON MY OWN HIDDEN COMPILED-IN DEFAULTS.  It's not even REMOTELY in your network, but what the heck, what you don't know won't hurt you, right?"
  4. "Gee, ntpd is running, but my clock's still wrong.  How can that happen...?"

Apple has from time to time come up with some very good ideas.  The aforementioned CUPS, for example, works very well.  My opinion of Apple would, however, be rather less jaundiced did it not so frequently appear that when designing or implementing a technology, Apple looks at what the rest of the world is doing, then deliberately does something incompatible.¹

I disabled mdns in /etc/nsswitch.conf, restarted ntp, and SHAZAM!  Magically, just like that, my machines' clocks are perfectly synchronized.
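
If you hit the same thing, the fix is one line in /etc/nsswitch.conf.  A sketch of the two sane options (the second is the stock nss-mdns recommendation, if you still want .local names resolved):

```
# /etc/nsswitch.conf -- the hosts line only.
# Option 1: drop mdns entirely:
hosts: files dns
# Option 2: restrict mdns to .local lookups, so it can never shadow real DNS names:
# hosts: files mdns4_minimal [NOTFOUND=return] dns
```

The [NOTFOUND=return] marker is what stops the minimal mdns module from falling through and answering for names it has no business resolving.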

[1]  Possibly the classic example of all time² is Apple's proprietary Appletalk network protocol, in which network address assignment is basically accomplished by jabbering.  In every OTHER network protocol that I'm aware of on the face of the planet, jabbering is considered to be a severe networking failure.  This is why, whenever an Appletalk network exceeded a certain rather small number of nodes, nodes would start just randomly dropping in and out of view on the network.  There was nothing you could do about it.  You couldn't fix it.  Apple eventually sort of fixed it, by creating Appletalk Phase II, which divided networks up into zones, and an Appletalk device could only talk to other devices in the same zone that it was in.  This didn't fix the problem, but it sort of alleviated it.  Somewhat.

[2]  Though a close runner-up is the Apple one-global-system-menubar metaphor, which was bad enough on single-monitor Macs, but really broke hard when Macs started being able to handle multiple monitors.  A list of everything that's wrong with the single-system-menubar metaphor would be a post of its own.  Yet twenty years on, MacOS users are STILL stuck with it.³  Apple insisted that This Was The Best Way To Do It, because they had called in Efficiency Experts.  What I've always wanted to know is, did the "efficiency experts" know anything whatsoever about using computers in the real world?

[3]  The saddest part is, they don't care, because The Steve Has Decreed That It Is Good.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Friday, October 1st, 2010 07:26 am

Sergei Shevchenko decodes a multilayered, multi-version matryoshka trojan hidden in a Flash video.

When this master exploit encounters one of these twelve combinations [of platform and Windows version number], it will unpack the string by interpreting it as a hex-encoded byte sequence, and then reload this code as a Flash file via loader.loadBytes().  I don't believe it:  A Flash file which is decoded dynamically and then loaded proceeds to load another dynamically created Flash file from a repertoire of twelve strings.  But it's true.  When I copy the first string into my hex editor, I can clearly detect the structure of a Flash file, this time an uncompressed one.

That's what I call professional!  The various versions of the Flash environment differ to such a degree that exploits which rely on specific addresses only cause the Player to crash, which doesn't serve the attackers.  Therefore, they either have to write very generic shell code – which is quite difficult – or they arm themselves with an arsenal of very specific exploits and then choose the one that fits the respective Flash Player.  Quite obviously, our little criminal has chosen the easier way and is carrying a whole arsenal of weapons.


And the fact that malware authors now use the obfuscation strategies already known from conventional Win32 malware in virtual environments such as Flash is bad news for the anti-virus vendors, because it ultimately means that AV vendors will from now on also need to emulate the run-time environments of JavaScript, the .NET framework and Adobe ActionScript to reliably detect malware.

This is a very clever exploit.  And I seem to find myself saying that altogether too frequently of late.

The first generation of malware was one-off exploits, frequently fairly simple, put together usually by single individuals who launched them themselves for the notoriety, for extortion, or just to see what they could wreck.  The second generation was the polymorphic virus development toolkits that let basically any black-hat wanna-be build a virus with a handy point-and-drool interface, and fast-spreading blitz worms like Code Red.  And the third generation...

Well, the third is sophisticated, professionally-crafted attacks like this, like Stuxnet, like Conficker.

The world has changed.  Again.  And not for the better.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Wednesday, September 29th, 2010 04:43 pm

New keyboard is shiny... well, more flat black than shiny, actually.  Very tasteful and glare-free.  New keyboard has excellent key feel, is comfortable, seems well made, and hasn't exhibited any glitches so far.  Key caps are crisply molded and seem well marked.

Time will tell.

By the way, NewEgg's photo is old.  Adesso's, not surprisingly, reflects the current keyboard layout.

I think I'm going to like this keyboard.  I've only had it an hour or two and I'm just about up to full speed on it already.

unixronin: A somewhat Borg-ish high-tech avatar (Techno/geekdom)
Thursday, September 23rd, 2010 12:49 pm

OK, so the Conficker worm was smart and scary (though it ultimately appeared to fizzle; there is still speculation that it was a proof-of-concept).  It looks like Stuxnet might be scarier, in its own way.

Stuxnet makes use of two compromised digital certificates and four known zero-day Windows vulnerabilities; selectively uses a fifth vulnerability (the same one exploited by Conficker) on its target systems, where they're likely to be unpatched; it can infect a system running any version of Windows from a USB stick, upon insertion, with no additional user action required whatsoever; it limits the number of systems it infects, to try to avoid attracting attention; it specifically looks for SCADA systems to take over; and it knows how to reprogram SCADA systems.  The term "cyber missile" is being used to describe it.  It appears to be incredibly specifically targeted; it "fingerprints" systems that it infects in order to identify its target, looking for specific code in specific locations on specific programmable logic controllers, but there are no clear indications whether it has found its target yet.  It leaves systems that have the wrong "fingerprint" alone and doesn't interfere with them.  There is apparently speculation that (a) it's too sophisticated to be an "amateur" effort, (b) that it's possibly targeted at Iran's Bushehr nuclear reactor, and (c) that it may have already done its work — as Bushehr did not come online in August as it was supposed to, which Iran has explained away as "hot weather".  (Hot weather preventing a facility in Iran from coming online?  That sounds a bit like a ship not being launched on schedule because the ocean was wet.)

Links: Computerworld, Christian Science Monitor, a second CSM article, NetworkWorld, and Knowledge Brings Fear (blog; the author thinks it is targeted at Iran's uranium-enrichment centrifuges rather than at Bushehr itself, and cites Wikileaks and the BBC for supporting evidence).

unixronin: Sun Ultrasparc III CPU (Ultrasparc III)
Monday, September 13th, 2010 09:06 am

Operating system upgrades are a pain in the ass.  It's a natural law, right?  You have to bring the system down, reboot off a newer install system, upgrade, pray it goes right, then spend days reinstalling things the upgrade broke, migrating applications the upgrade missed, and reconfiguring all your preferences that the upgrade reset back to their defaults.  Right?

Well, not always.  Sun^WOracle recently issued Solaris 10 u9 09/2010 (not for general use; it's an unsupported developer release), and I took the opportunity to try out the Solaris LiveUpgrade tool.  LiveUpgrade makes a complete OS upgrade almost a no-brainer — and as the name implies, you can do it on a running system while continuing to use the system you're upgrading.  If you're running on ZFS, you don't even need to mess with any disk filesystems; all necessary changes are done for you automagically.

Here's a capsule summary of the process:

  1. Mount the OS upgrade DVD image via loopback (yes, you can do this without even burning a physical disc).
  2. Install the current live-upgrade package from the DVD image.
  3. Run lucreate to create a duplicate copy of your boot environment.
  4. Run luupgrade to upgrade the duplicate boot environment to the new release.  Wait 20-30 minutes while it works.
  5. Mount the new boot environment at an alternate mountpoint and check the upgrade log to see if it needs any post-upgrade cleanup.  Do whatever is necessary.  You shouldn't have to touch more than a handful of files.  (Unmount it again when you're done.)
  6. Run luactivate to activate the new boot environment.  Wait a couple of minutes while it prepares files and updates the bootloader.
  7. Reboot the machine via either shutdown -r now or init 6.
  8. Check that all services restarted cleanly after reboot.
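For the record, the whole sequence condenses to something like the following.  The boot-environment names, ISO path, and mountpoints here are illustrative, not my actual ones, and minor details (live-upgrade package names, DVD layout) vary by release:

```shell
# LiveUpgrade sketch -- run as root on the live, running system.
lofiadm -a /export/iso/sol-10-u9-ga-x86-dvd.iso    # attach the ISO (prints e.g. /dev/lofi/1)
mount -F hsfs -o ro /dev/lofi/1 /mnt               # loopback-mount the DVD image
pkgadd -d /mnt/Solaris_10/Product SUNWlur SUNWluu  # install the DVD's live-upgrade packages
lucreate -c s10u8 -n s10u9                         # duplicate the current boot environment
luupgrade -u -n s10u9 -s /mnt                      # upgrade the duplicate (20-30 minutes)
lumount s10u9 /a                                   # mount new BE; review the upgrade log
less /a/var/sadm/system/data/upgrade_cleanup       # fix up whatever it flags
luumount s10u9                                     # unmount it again
luactivate s10u9                                   # make the new BE the boot default
init 6                                             # reboot (init 6 or shutdown -r, never 'reboot')
```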

Shazam!  That's it.  You're DONE.  All of your OS packages are upgraded.  All of your third-party software is copied over, untouched, exactly as it was.  Almost all of your settings are preserved, and anything LiveUpgrade reverted, it told you about and you've already had the opportunity to re-customize it before rebooting.  And you still have your original, pre-upgrade OS image to fall back to in the event you run into a problem.  (In fact, if you're running on ZFS, you have the original un-upgraded OS, and a snapshot of the original OS at the moment you started lucreate, and a snapshot of the new boot environment right before you ran luupgrade.  If you ran into a problem with luupgrade, you can roll the new environment back to that snapshot and re-run luupgrade.)

This is how all OS upgrades should work.

unixronin: Sun Ultrasparc III CPU (Ultrasparc III)
Tuesday, September 7th, 2010 09:28 pm

Sometime about 0440 Sunday morning, my main server, babylon4, went down hard and fast.  I still haven't been able to reconstruct a single thing about what actually happened, but it left the machine down, with the boot archive corrupt and the boot blocks completely gone.  The last thing logged before it went down — about half an hour before — was Apache2 logging some probably-acne-ridden git trying in vain to probe for phpMyAdmin holes.  (It's not installed.  Neither is PHP.)  Just to add a weird touch, whatever happened apparently sent a break over the serial console line to epsilon3 and halted it too.

I didn't discover this until I got up on Sunday.  I fairly quickly discovered that all of the ZFS filesystems and their data were completely intact; the system just couldn't be booted.  I spent essentially all of Sunday trying various different ways to repair the boot blocks and boot archive, not one of them successful.  I managed once to boot it by hand using the grub on a Solaris 10 install DVD and the failsafe miniroot from my Solaris installation, but that wasn't any help because ZFS on-disk had been patched to a newer version than originally installed and the ZFS patch didn't patch the ZFS drivers in the miniroot.

Well, I never got around to live-upgrading the machine to Solaris 10 u8 10/09, anyway.  So on Sunday I backed up the user-data filesystems in the root ZFS pool over to the main array pool using ZFS snapshots, blew away the root pool, and reinstalled 10/09 from scratch, then on Monday morning set about reinstalling third-party packages and reconfiguring the Solaris ones the way I wanted them.  I had a few minor fights with smf, the Solaris Service Management Facility, but after it saw reason and agreed to do things my way, I had most of it all set up and running again Monday night, and finished setting up the last group of services today.  Thanks to the ZFS snapshot gambit, I didn't lose a single file or configuration setting.
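The snapshot gambit itself is only a handful of commands.  The pool and dataset names here are illustrative (rpool for the root pool, tank for the main array), not necessarily what I actually used:

```shell
# Evacuate user data out of the doomed root pool via ZFS snapshots.
zfs snapshot -r rpool/export@evacuate                  # recursive snapshot of the user data
zfs send -R rpool/export@evacuate | zfs recv -d tank   # replicate it into the array pool
# ...destroy the root pool, reinstall from scratch, then reverse the
# send to put everything back exactly as it was:
zfs send -R tank/export@evacuate | zfs recv -d rpool
```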

Of course, being a prudent sort, right now I have a fresh set of full backups running.