Unixronin
Wednesday, March 11th, 2009 05:25 pm

Disney and Scholastic share a Software Hall of Shame raspberry today.

For what?  Disney Magic Artist Deluxe and Scholastic I Spy Fantasy.

Why?

Because they're both children's games — young children, as in Wendy's is giving I Spy Fantasy away in Kids' Meals — that require Administrator privileges to run.

FAIL.

Don't these people ever think before they write code?

Thursday, March 12th, 2009 02:55 am (UTC)
I disagree. The client rarely knows the rigor requirement.

As an example: imagine that you're told, "we need you to do X on these SQL tables, and it's okay if things blow up." You do X and things blow up.

"No problem," your client says, "restore it from backup." Sure. Where's your backup? "Here."

... but these files are corrupt. Didn't you guys test this backup before you put it in storage?

"We didn't know you needed to do that."

Yes, I have had that conversation with people before. That's one of the reasons why no, I do not believe clients know what their rigor requirements are.

That's why we have requirements gathering as a phase of software development.
Thursday, March 12th, 2009 03:34 am (UTC)
The client knows that the table is critical for business operations. The client knows that the update needs to work on all the tables in one shot. The client knows it is important.

In that situation, I do a backup of the table before I start! I also verify that I have a good copy before I start. If I have the space, I like two copies of the data I am going to modify. Then, if I have the time, I do a full verification of the changed data. If there is time to do a restore from backup, there is time to satisfy my verification needs.
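The backup-and-verify workflow described above can be sketched in code. This is a toy illustration only, using SQLite and hypothetical table and column names (`orders`, `price`), not anything from the actual incident:

```python
import sqlite3

# Hypothetical setup so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 19.99)])

# Step 1: take a copy of the table before touching it.
conn.execute("CREATE TABLE orders_backup AS SELECT * FROM orders")

# Step 2: verify the copy is good BEFORE starting the risky update.
(orig_count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
(backup_count,) = conn.execute("SELECT COUNT(*) FROM orders_backup").fetchone()
assert orig_count == backup_count, "backup is incomplete -- do not proceed"

# Step 3: do the update in one shot.
conn.execute("UPDATE orders SET price = price * 1.1")

# Step 4: verify the changed data; if it looks wrong, restore from the copy
# we already know is good, instead of praying over an untested backup.
(bad,) = conn.execute("SELECT COUNT(*) FROM orders WHERE price <= 0").fetchone()
if bad:
    conn.execute("DELETE FROM orders")
    conn.execute("INSERT INTO orders SELECT * FROM orders_backup")
```

The point of step 2 is exactly the one made in the thread: a restore is only as good as a backup you have actually verified.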

We are talking at cross purposes. The client knows how important it is to get things right, and at what pace and in what timeframe. We need to determine what we need in order to meet that requirement. That is our job. If there is a potential need for a restore from backup, we need to ensure that the backup is sound. That is the extrinsic information we need to extract during the requirements-gathering phase. That is the language translation I was thinking about: changing things from customer vocabulary to CS vocabulary. Once we have a common language, the customer knows what his needs are.
Thursday, March 12th, 2009 03:40 am (UTC)
You're cherrypicking at this point. What you're saying is, "in that situation, rather than rely on what the client knows to be true, I would have investigated and made sure of the one thing which went wrong in your situation."

But the domain of possible problems like that is enormous. That sort of cherrypicking isn't possible except in hindsight.

I operate under the belief the client knows their business operations and knows the outcome they want. Everything else — everything else — must be determined independently rather than taken on faith. Outside of their business operations and their desired outcome, I have to assume the client is dead wrong about at least one thing that will bite me in the ass — and it's my job to find that thing before it has the opportunity to do so.

That means I can't take the client at their word about rigor.
Thursday, March 12th, 2009 03:55 am (UTC)
Not cherrypicking; I have been there, done that. I get really anal about the things that can go wrong in a significant update. I check everything under my control. Experience and training make those kinds of checks incidental. (That is a big part of what the customer is paying for, and that value is why you don't go with the lowest bidder.)

What you are saying is that the client knows how critical the operation is. That translates to rigor. The more important to the business, the greater the rigor required. I think we are saying the same thing.

My first law states: if you plan for a contingency, it will never happen. There is a corollary... At some point, it comes down to judgment. When have you searched far enough afield for the biter?
Thursday, March 12th, 2009 11:06 am (UTC)
I still disagree and I still don't think we're talking at cross purposes.

In my example with the botched SQL update, the client knew the job had to be done — the client didn't believe the operation was critical. It was a routine operation that had a low chance of exploding. The client didn't understand the risks they were facing and didn't understand the consequences of those risks coming to pass (mostly, "we run around and scream wildly"). They were willing to pay a low rate for a couple of hours of work because they were satisfied the job didn't warrant more than that.

As you say, "I check everything under my control." It's a great policy and I agree with it. I just emphatically disagree that "experience/training makes those kinds of checks incidental." Experience and training can reduce the amount of time necessary to spend on this overhead while still maintaining your level of professional diligence, but except for trivially small projects I don't see how that overhead can ever be minimized to the level where it can be called incidental.

I share the spirit of your first law. It's not my first law, but it's pretty high on the list. I usually phrase it as "no crisis ever came from a controlled failure." Software failure is not necessarily a bad thing.

There was a plane crash a while ago where a Boeing autopilot throttled back the engines on landing when the altimeter reading dropped abruptly from 2,000 feet above ground level to -8 feet AGL. If the autopilot software had assumed the altimeter was capable of being batshit insane from time to time and reacted appropriately, the disaster would probably have been avoided. The altimeter's failure was not the source of the crisis; the source was the autopilot's inability to control that failure.
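The "control the failure" idea in that last example can be shown with a toy sanity check. This is a hypothetical sketch with made-up thresholds, not real avionics logic: an implausible altimeter jump is flagged as a sensor fault and the last good reading is kept, rather than obeyed:

```python
# Toy illustration (hypothetical threshold, NOT real avionics logic):
# reject physically implausible altimeter jumps instead of acting on them.

MAX_PLAUSIBLE_JUMP_FT = 500  # assumed per-sample limit for this sketch


def filtered_altitude(prev_alt_ft, new_alt_ft):
    """Return (reading to act on, sensor_ok), holding the last good value
    when the new reading is batshit insane."""
    if new_alt_ft < 0 or abs(new_alt_ft - prev_alt_ft) > MAX_PLAUSIBLE_JUMP_FT:
        # Controlled failure: keep the last good reading and flag the fault.
        return prev_alt_ft, False
    return new_alt_ft, True


# A sudden 2000 ft -> -8 ft reading is treated as a fault, not the truth.
alt, ok = filtered_altitude(2000, -8)
```

The failure still happens; the point is that the software contains it instead of letting one insane input drive the whole system.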