Monday, November 29th, 2010 09:49 pm

Specifically, for yet another "clever" Apple technology with unintended consequences.

You see, ever since getting my new machine up, I've been noticing a really excessive amount of clock drift relative to my internal stratum-3 network timeserver.  Like, six seconds per day of clock drift.  This is really bad considering I'm running ntp.  The old babylon5 had some clock drift, but never this bad.

Today, I finally found the problem.  And the problem is that mDNS is shooting me in the foot.

mDNS is Apple's multicast DNS technology, and certain Linux packages — CUPS, for one, also created at Apple — are really rather difficult to persuade not to use it.  But when you install it on Linux, it never gets properly configured.  The file /etc/nss-mdns.conf — which should contain the list of domains for which mDNS SHOULD resolve IP addresses — is not created, the package doesn't tell you that you need to create it, and no documentation on doing so gets installed either.

Now, this shouldn't be a problem.  You see, here's what should happen, if mDNS were reasonably designed, when, say, ntpdate tries to synchronize to a timeserver:

  1. ntpdate requests an IP address for the designated timeserver.
  2. nsswitch checks the /etc/hosts file.  It doesn't find the IP there, so it passes the request to mDNS.
  3. mDNS checks /etc/nss-mdns.conf and finds it empty or nonexistent.  mDNS says "Oh, no domains for me to resolve, nothing for me to do here", and passes the request to DNS.
  4. DNS looks up the address, passes it back to ntpdate, and ntpdate makes the connection to the correct server.
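For reference, a resolver chain that actually behaves like the steps above is usually spelled out on the hosts line of /etc/nsswitch.conf.  This is a sketch, not a drop-in fix — exact module names vary by distribution — but the mdns4_minimal variant plus an explicit action is the conventional safe arrangement:

```
# /etc/nsswitch.conf (hosts line only)
# mdns4_minimal answers only for *.local names; the [NOTFOUND=return]
# action stops the chain only when a .local lookup fails, so every
# other name falls straight through to ordinary unicast DNS.
hosts: files mdns4_minimal [NOTFOUND=return] dns
```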

But, Apple being Apple, this is what REALLY happens:

  1. ntpdate requests an IP address for the designated timeserver.
  2. nsswitch checks the /etc/hosts file.  It doesn't find the IP there, so it passes the request to mDNS.
  3. mDNS checks /etc/nss-mdns.conf and finds it empty or nonexistent.  mDNS says "Oh, there's no list of domains for me to resolve, but you didn't explicitly tell me NOT to resolve anything, so here, LET ME GIVE YOU SOME RANDOM IP ADDRESS THAT I PULLED OUT OF SOMEONE ELSE'S ASS BASED ON MY OWN HIDDEN COMPILED-IN DEFAULTS.  It's not even REMOTELY in your network, but what the heck, what you don't know won't hurt you, right?"
  4. "Gee, ntpd is running, but my clock's still wrong.  How can that happen...?"
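The whole difference between the two flows above is whether the mdns module declines names it has no business answering.  Here's a toy model of the lookup chain in Python — the function names and addresses are made up for illustration, this is not the real NSS API:

```python
# Toy model of the nsswitch "hosts" lookup chain, showing how an
# mdns module that answers unconditionally shadows DNS entirely.

def lookup(name, chain):
    """Try each resolver in order; first non-None answer wins."""
    for resolver in chain:
        result = resolver(name)
        if result is not None:
            return result
    return None

def files(name):
    # Stand-in for /etc/hosts.
    return {"localhost": "127.0.0.1"}.get(name)

def mdns_buggy(name):
    # No domain list configured, so it "helpfully" answers for
    # everything with a bogus link-local guess.
    return "169.254.0.42"

def mdns_minimal(name):
    # Sane behavior: only answer for .local names, decline the rest.
    return "169.254.0.42" if name.endswith(".local") else None

def dns(name):
    # Stand-in for real unicast DNS.
    return {"ntp.example.net": "192.0.2.10"}.get(name)

# Buggy chain: mdns answers first, dns is never even consulted.
print(lookup("ntp.example.net", [files, mdns_buggy, dns]))    # 169.254.0.42
# Fixed chain: mdns declines non-.local names, dns answers correctly.
print(lookup("ntp.example.net", [files, mdns_minimal, dns]))  # 192.0.2.10
```

Once the bogus answer comes back, ntpdate faithfully tries to sync against an address that isn't a timeserver at all, which is exactly the silent drift described above.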

Apple has from time to time come up with some very good ideas.  The aforementioned CUPS, for example, works very well.  My opinion of Apple would, however, be rather less jaundiced did it not so frequently appear that when designing or implementing a technology, Apple looks at what the rest of the world is doing, then deliberately does something incompatible.¹

I disabled mdns in /etc/nsswitch.conf, restarted ntp, and SHAZAM!  Magically, just like that, my machines' clocks are perfectly synchronized.
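For anyone wanting to do the same, the change amounts to deleting the mdns entries from the hosts line in /etc/nsswitch.conf.  The "before" line here is only a typical example — your original may list different modules:

```
# /etc/nsswitch.conf
# before (typical):
#   hosts: files mdns4_minimal [NOTFOUND=return] mdns dns
# after — mdns removed entirely, unicast DNS handles everything:
hosts: files dns
```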

[1]  Possibly the classic example of all time² is Apple's proprietary AppleTalk network protocol, in which network address assignment is basically accomplished by jabbering.  In every OTHER network protocol that I'm aware of on the face of the planet, jabbering is considered a severe networking failure.  This is why, whenever an AppleTalk network exceeded a certain rather small number of nodes, nodes would start randomly dropping in and out of view on the network.  There was nothing you could do about it.  You couldn't fix it.  Apple eventually sort of fixed it with AppleTalk Phase II, which divided networks up into zones; an AppleTalk device could only talk to other devices in the same zone it was in.  This didn't fix the problem, but it sort of alleviated it.  Somewhat.

[2]  Though a close runner-up is the Apple one-global-system-menubar metaphor, which was bad enough on single-monitor Macs, but really broke hard when Macs started being able to handle multiple monitors.  A list of everything that's wrong with the single-system-menubar metaphor would be a post of its own.  Yet twenty years on, MacOS users are STILL stuck with it.³  Apple insisted that This Was The Best Way To Do It, because they had called in Efficiency Experts.  What I've always wanted to know is, did the "efficiency experts" know anything whatsoever about using computers in the real world?

[3]  The saddest part is, they don't care, because The Steve Has Decreed That It Is Good.

Tuesday, November 30th, 2010 03:57 am (UTC)
I suspect that a global system menubar always at the top of the screen made sense ergonomically when the screen was the original Mac screen (at 640x480 sort of size) and the alternative was a menu bar that was annoyingly close to the top, but not quite. (This is basically the netbook problem again, and some of those installs are trying top-of-screen menu bars for the same reason.)

However, it started failing when screen real estate got big enough that you could have multiple non-overlapping windows of a reasonable size, some of which were a long way from the top of the screen, requiring a lot of mouse mileage to get to/from the menu. And it's downright annoying on a multi-monitor setup. So it's not entirely obvious to me why they stuck with it through, e.g., the OS 9 to OS X transition, or past about OS X 10.2, other than the "always done that, never gone back to check if the reasons still make sense" reason. (Especially so because they have made some major transitions, including through multiple CPU platforms, in ways that everyone recognised were clear improvements.)

FWIW, there is a lot of crack-pipe technology around. Apple is hardly alone in creating it (and they're by no means the worst for "close, but completely different"). The standout thing about Apple seems to be in persuading random people that it's a good idea anyway, and that they can't live without it.

Ewen
Tuesday, November 30th, 2010 04:01 am (UTC)
I also meant to say that OS X's window management, in general, sucks (e.g., new application windows are usually positioned in the most pessimal location, even given ample empty screen real estate for even the most lame layout algorithm to do better). And I say this having used OS X as my primary desktop for about 18 months (after years of lots of other things). So my impression is that screen real estate management is not something that Apple has properly polished in years.

Ewen
Wednesday, December 1st, 2010 06:51 pm (UTC)
When compared to Larry Ellison or Steve Jobs, Bill Gates doesn't seem so bad after all as the leader of the personal-computer phase of technology intrusion. I do have a Mac (G4) that I use when software drivers insist on something other than Linux. It is far from my favorite machine. My first exposure to Macs was an SE/30, mumble years ago. I had to write an application and some hardware access stuff. It required inventing three new cusswords a day to get through that contract. I prefer far more flexibility than Apple allows from my computing hardware. I would love an iPad, if I could X into my servers and run the applications there. It'd be great for running updates from anywhere in the house. I could also use it for finances and tracking while traveling. But Steve has decreed that I cannot X with an iPad. The products just frustrate me.

(I do have an iPod that I got for points. Limiting me to a Mac for updating it is really frustrating. [I could use Windoze... no.])