Wednesday, March 11th, 2009 05:25 pm

Disney and Scholastic share a Software Hall of Shame raspberry today.

For what?  Disney Magic Artist Deluxe and Scholastic I Spy Fantasy.

Why?

Because they're both children's games — young children, as in Wendy's is giving I Spy Fantasy away in Kids' Meals — that require Administrator privileges to run.

FAIL.

Don't these people ever think before they write code?

Wednesday, March 11th, 2009 09:31 pm (UTC)
I doubt that they think during or after writing the code either. I also suspect that there aren't any good reasons why they need administrator privileges. Most likely they are writing temp files in forbidden locations or something stupid like that.
Wednesday, March 11th, 2009 10:01 pm (UTC)
I can tell you exactly why they both need administrator privileges. It's because, nine years after Windows stopped being a single-user OS, they still haven't figured out that any well-behaved Windows user program, and certainly any program intended to EVER be run by an unprivileged user, needs to write things like tempfiles and per-user settings within the user's Documents and Settings folder, not directly into the install folder under Program Files.

Imbeciles. It's NOT LIKE IT'S A RECENT CHANGE. It's been this way since Windows 2000 shipped. If you haven't figured it out by now, almost three major Windows versions later, you need to find another line of work.
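To make the point concrete, here is a minimal sketch of the right behaviour (Python for brevity, rather than whatever C/C++ these titles are actually built in; the folder and file names are just examples):

import os

def per_user_data_dir(app_name):
    # Resolve a folder the current, unprivileged user can always write to.
    # On Windows 2000/XP, APPDATA points at something like
    # "C:\Documents and Settings\<user>\Application Data", which is exactly
    # where per-user settings and scratch files belong -- unlike the install
    # folder under Program Files, which needs Administrator rights to modify.
    base = os.environ.get("APPDATA") or os.path.expanduser("~")
    path = os.path.join(base, app_name)
    os.makedirs(path, exist_ok=True)
    return path

# Hypothetical usage:
# settings = os.path.join(per_user_data_dir("Magic Artist Deluxe"), "settings.ini")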
Wednesday, March 11th, 2009 10:11 pm (UTC)
On the other hand, are they new programs or just reskins of programs that are ten years old? I would suspect that the actual programmers moved on to new positions or different companies years ago.

I do agree that it's laziness that they haven't bothered to change with the times, though.

Even current versions of Windows are barely multi-user. Support for remote users and concurrent use of the computer is rudimentary at best.

Why does competence seem like such a high bar to pass? It isn't like we are expecting greatness out of these people or programs.
Wednesday, March 11th, 2009 10:18 pm (UTC)
Games seem to be particularly bad, not just in where they want to write stuff, but also in the system settings they'll clobber.
Wednesday, March 11th, 2009 10:38 pm (UTC)
Particularly children's games. It's as though they think kids won't care when the game fucks up.
Wednesday, March 11th, 2009 10:48 pm (UTC)
It's the way they do things like reset the screen to some archaic size and colour-depth, and then leave it there when they finally exit, as if everyone should use 800x600 or whatever...
Wednesday, March 11th, 2009 10:37 pm (UTC)
On the other hand, are they new programs or just reskins of programs that are ten years old?
Well, the I Spy requires gdiplus.dll, which first appeared in XP. So at worst it's contemporary with XP.

Frankly, I can't say I've ever been very impressed with Scholastic. The only thing they seem to be good at is urging kids to nag their parents into spending wads of money on low-budget book editions and "educational" toys that half the time are complete junk.
Wednesday, March 11th, 2009 11:17 pm (UTC)
Short answer: because competence costs money.

I am just now getting to the point in my life where I feel I could be trusted to write reasonable-sized pieces of code and not run the risk of making a total hash of it with a 1.0 release. I'm a thesis away from a Ph.D. in computer science and have over fifteen years of professional programming experience in eighteen different languages. If I were to work for a consultancy, I would likely be billed at $300/hr or more. Let's say that it takes me X amount of time to do a professional, competent job: a documentation stack, unit tests, software iterations with customer feedback at the end of each. The finished product costs $300X and is of high quality.

Compare this to someone with a two-year college degree who says "sure, I can do that, let me bash it out really quick in Java...!". They won't bother with documentation or unit tests -- you'll be lucky if they've even heard of the latter, and they won't know how to write effective engineering documentation for the former. They won't gather the requirements effectively, they won't involve the client in the design, and the final product will be dropped on the client's desk as a fait accompli. "Here. You said it should do Y, and it does. I mean, I think it does. I realized halfway through that Y is kind of ambiguous. So I just figured out what I thought you probably meant and went on from there."

The two-year college grad charges $20/hr, and spends probably 0.5X hours. When it comes to actual coding, I leave the kid in the dust — what took the kid a month of pure coding to do, I knocked out over a long weekend. But I had a long engineering process leading up to that, which means that on balance the kid did it in half the time and for a thirtieth the cost.

From the client's perspective, both products seem to be equivalent. Both work fine on the client's PCs, after all. They both have the appropriate eye candy. The client decides to save themselves 97% of the cost and go with the two-year college grad, and then talk to their bosses about how "we found this kid, he's awesome, he did the same job as that professional engineer in half the time and for peanuts!"

Then the two-year college grad's code gets deployed on a million desktop PCs and all hell breaks loose.

Yep. False savings.
Thursday, March 12th, 2009 01:14 am (UTC)
A quality program meets the needs of the client. Most Perl batch scripts are write-only code, because it is easier, and faster, to just redo the code rather than figure out what you were thinking when you wrote it the first time. That is still the quality solution. A program to be deployed "in house" does not need the stringent quality testing that something deployed "in the wild" needs, because the "in house" program has a controlled environment and available support. (In theory; don't get me started.)

Writing solid code is a habit. Writing elegant code is a gift. If you have both, you should do OK. Learning all the skills you enumerate around writing code is useful, but seldom necessary. There are things like deadlines and budgets that are often more important than full iterative testing with viability proven against trial data. Once you know how to do the full job, success depends on knowing which parts of the full cycle are most important to the customer.

I have sat on both sides of the table, programmer and manager, business owner and customer. Providing 110% of what the customer wants delights them. Providing 200+% of what the customer wants ticks them off. (They could have had it faster AND less expensive.)

It is like me writing code in assembler. Before optimizing compilers, it was a no-brainer to drop to assembly for high-use functions; I could cut 70% off the time to run the program. With optimizing compilers, I can rarely get better than 10% time savings for a specific routine, and it takes much longer to code. I need very long program runs, or lots of users, to justify the time. Quality is what the customer needs, not what we are capable of providing when we go full bore.
Thursday, March 12th, 2009 01:30 am (UTC)
I find it interesting — I sketched out two points on a continuum, and your immediate assumption seems to have been (seems to! I could be wrong!) that both points are a priori invalid regardless of what the end product is. Sometimes each is appropriate. If I'm a client looking for someone to write a quick script to do a job where, if the script fails, no serious harm is done to the system, then yes, it's perfectly reasonable to hire the two-year college grad. But if I'm looking for someone to write a massive SQL update over a billion records, all of which are essential to company functioning and for which no backup has been made in the last year… well. I might very well want the other endpoint.

Unfortunately, real-world problems are not so easily categorized.

We're agreed that the wise engineer knows where to position a project on the spectrum. My point is not that we should drown the client in more rigor than their needs call for — my point is that the client very often does not know what their needs are, quality-wise, and does not know how to evaluate the claims of contractors. As a result, more often than not clients think on the basis of raw dollar figures, seeking the lowest bid and getting exactly what they paid for.

That's my answer to argonel's question of why competence is such a high bar to pass.
Thursday, March 12th, 2009 02:31 am (UTC)
The customer may not be able to articulate their rigor requirements, but they know them.

You need to sell based on value, not price. In real terms, very few people shop on price alone, and you don't want them as customers anyway! They pay you twice for doing business with a competitor.

The continuum is a correct concept. There is a vast difference between a NASA/JPL program requirement and a daily backup Perl script trigger. Your customer (in some cases, your employer) knows what the need is. Even on the same job, not all programs need the same level of assurance/testing. The client knows! You just need to find a language in common so that those needs are understood.
Thursday, March 12th, 2009 02:55 am (UTC)
I disagree. The client rarely knows the rigor requirement.

As an example: imagine that you're told, "we need you to do X on these SQL tables, and it's okay if things blow up." You do X and things blow up.

"No problem," your client says, "restore it from backup." Sure. Where's your backup? "Here."

... but these files are corrupt. Didn't you guys test this backup before you put it in storage?

"We didn't know you needed to do that."

Yes, I have had that conversation with people before. That's one of the reasons why no, I do not believe clients know what their rigor requirements are.

That's why we have requirements gathering as a phase of software development.
Thursday, March 12th, 2009 03:34 am (UTC)
The client knows that the table is critical for business operation. The client knows that the update needs to work on the whole table in one shot. The client knows it is important.

In that situation, I do a backup of the table before I start! I also verify that I have a good copy before I start. If I have the space, I like two copies of the data I am going to modify. Then, if I have the time, I do a full verification of the changed data. If there is time to do a restore from backup, there is time to satisfy my verification needs.
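For the sake of a runnable illustration, something like this (SQLite and made-up table names; a real billion-row table wants a proper dump/restore, but the backup, verify, modify sequence is the point):

import sqlite3

def backup_verify_update(db_path, table, update_sql):
    # Back the table up, verify the copy is good, and only then run the update.
    con = sqlite3.connect(db_path)
    try:
        con.execute("DROP TABLE IF EXISTS %s_backup" % table)
        con.execute("CREATE TABLE %s_backup AS SELECT * FROM %s" % (table, table))

        # Verify we really have a good copy before touching the live data.
        (orig,) = con.execute("SELECT COUNT(*) FROM %s" % table).fetchone()
        (copy,) = con.execute("SELECT COUNT(*) FROM %s_backup" % table).fetchone()
        if orig != copy:
            raise RuntimeError("backup row count mismatch; not proceeding")

        con.execute(update_sql)
        con.commit()
    except Exception:
        con.rollback()
        raise
    finally:
        con.close()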

We are talking at cross purposes. The client knows how important it is to get things right, and at what pace and in what timeframe. We need to determine what we need in order to meet that requirement. That is our job. If we have a potential need for a restore from backup, we need to ensure that the backup is sound. That is the extrinsic information that we need to uncover during our requirements-gathering phase. That is the language translation I was thinking about: changing things from customer vocabulary to CS vocabulary. Once we have a common language, the customer knows what their needs are.
Thursday, March 12th, 2009 03:40 am (UTC)
You're cherry-picking at this point. What you're saying is, "in that situation, rather than rely on what the client knows to be true, I would have investigated and made sure of the one thing which went wrong in your situation."

But the domain of possible problems like that is enormous. That sort of cherry-picking isn't possible except in hindsight.

I operate under the belief the client knows their business operations and knows the outcome they want. Everything else — everything else — must be determined independently rather than taken on faith. Outside of their business operations and their desired outcome, I have to assume the client is dead wrong about at least one thing that will bite me in the ass — and it's my job to find that thing before it has the opportunity to do so.

That means I can't take the client at their word about rigor.
Thursday, March 12th, 2009 03:55 am (UTC)
Not cherry-picking: I have been there, done that. I get really anal about the things that can go wrong in a significant update. I check everything under my control. Experience/training makes those kinds of checks incidental. (That is a big part of what the customer is paying for; that value is what makes it worth not going to the lowest bidder.)

What you are saying is that the client knows how critical the operation is. That translates to rigor. The more important to the business, the greater the rigor required. I think we are saying the same thing.

My first law states: if you plan for a contingency, it will never happen. There is a corollary... At some point, it comes down to judgment. When have you searched far enough afield for the biter?
Thursday, March 12th, 2009 11:06 am (UTC)
I still disagree and I still don't think we're talking at cross purposes.

In my example with the botched SQL update, the client knew the job had to be done — the client didn't believe the operation was critical. It was a routine operation that had a low chance of exploding. The client didn't understand the risks they were facing and didn't understand the consequences of those risks coming to pass (mostly, "we run around and scream wildly"). They were willing to pay a low rate for a couple of hours of work because they were satisfied the job didn't warrant more than that.

As you say, "I check everything under my control." It's a great policy and I agree with it. I just emphatically disagree that "experience/training makes those kinds of checks incidental." Experience and training can reduce the amount of time necessary to spend on this overhead while still maintaining your level of professional diligence, but except for trivially small projects I don't see how that overhead can ever be minimized to the level where it can be called incidental.

I share in the spirit of your first law. It's not my first law, but it's pretty high on the list. I usually phrase it as "no crisis ever came from a controlled failure." Software failure is not necessarily a bad thing. There was a plane crash a while ago in South America where a Boeing autopilot scaled back the engines on landing when the altimeter reading dropped abruptly from 2000ft above ground level to -8 feet AGL. If the autopilot software had assumed the altimeter was capable of being batshit insane from time to time and reacted appropriately, the disaster would probably have been avoided. The altimeter's failure was not the source of the crisis; that was the autopilot's inability to control the failure.
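In code, the kind of check I mean is nothing exotic. A sketch, with invented thresholds, of the sanity filter a sensor's consumer ought to apply:

def plausible_altitude(prev_agl_ft, new_agl_ft, dt_s, max_rate_fps=300.0):
    # Reject a radio-altimeter reading that implies a physically absurd
    # change (e.g. 2000 ft to -8 ft in a single sample) or a below-ground
    # value. The caller holds the last good reading and flags the sensor
    # as suspect instead of acting on garbage -- a controlled failure.
    if new_agl_ft < -1.0:
        return False
    rate = abs(new_agl_ft - prev_agl_ft) / max(dt_s, 1e-6)
    return rate <= max_rate_fps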
Thursday, March 12th, 2009 03:02 pm (UTC)
Short answer: because competence costs money.

This.
Wednesday, March 11th, 2009 11:04 pm (UTC)
Unfortunately, there are still a lot of Windows 98/ME installations out there.

Win2K was meant for Corporate America, not for home users. They got to wait until 2001, and even then, adoption wasn't really big until mid-2002.

XP is six and a half years old. Long in the tooth, sure — but if you're going to make a mass-market giveaway promotion, you have to consider the remaining 98/ME crowd. Much as I would like it to be otherwise.
Thursday, March 12th, 2009 01:22 am (UTC)
XP is six and a half years old. Long in the tooth, sure — but if you're going to make a mass-market giveaway promotion, you have to consider the remaining 98/ME crowd. Much as I would like it to be otherwise.
True, but it still needs to run on the XP boxen too. And preferably the Vista ones.
Wednesday, March 11th, 2009 10:45 pm (UTC)
No. They're not thinking before they write the code. They're thinking of their paychecks, the beer they're going to drink at the end of the day, and hopefully the individual they're going to lay afterward. Or, if not, their right hand.
Thursday, March 12th, 2009 01:25 am (UTC)
My guess is that they used a programming environment or library set to cut programming time significantly. It is the environment that requires administrator privileges. The sad part is, replacing just a few functions in the library (or Windoze API) could eliminate the admin requirement entirely. The code shells used to run these games have been around since Windows 3.1, some without being updated other than to be able to run on the NT kernel.

No matter what we may want to believe, Windoze is still a single-user OS that just happens to be better at task switching than DOS. Trying to make it something else is just so much wishful thinking. If Micro$oft attempts to rearchitect the system to be more secure or a better client, they will break so many legacy applications that the transition to Linux will be automatic. Micro$oft is screwed: they can't fix it, and can't change it. If they weren't so amoral about keeping the transition from happening, I would almost feel sorry for them.
Thursday, March 12th, 2009 01:27 am (UTC)
Will it run under WINE?
Thursday, March 12th, 2009 02:10 am (UTC)
I don't know. I don't have WINE set up anywhere.
Thursday, March 12th, 2009 02:35 am (UTC)
Another system with lots of picky settings to get just right. It works really well if the family systems are Linux. It is also pretty good for older Windows programs. High-demand graphics applications are where it tends to fall apart. (Meaning many games.)