Unixronin
Wednesday, March 11th, 2009 05:25 pm

Disney and Scholastic share a Software Hall of Shame raspberry today.

For what?  Disney Magic Artist Deluxe and Scholastic I Spy Fantasy.

Why?

Because they're both children's games — young children, as in Wendy's is giving I Spy Fantasy away in Kids' Meals — that require Administrator privileges to run.

FAIL.

Don't these people ever think before they write code?

Thursday, March 12th, 2009 01:14 am (UTC)
A quality program meets the needs of the client. Most Perl batch scripts are write-only because it is easier, and faster, to just redo the code than to figure out what you were thinking when you wrote it the first time. That is still the quality solution. A program to be deployed "in house" does not need the stringent quality testing that something deployed "in the wild" needs, because the "in house" program has a controlled environment and available support. (In theory; don't get me started.)

Writing solid code is a habit. Writing elegant code is a gift. If you have both, you should do OK. Learning all the skills you enumerate around writing code is useful, but seldom necessary. There are things like deadlines and budgets that are often more important than full iterative testing with proven viability on trial data. Once you know how to do the full job, success depends on knowing which parts of the full cycle are most important to the customer.

I have sat on both sides of the table, programmer and manager, business owner and customer. Providing 110% of what the customer wants delights them. Providing 200+% of what the customer wants ticks them off. (They could have had it faster AND less expensive.)

It is like me writing code in assembler. Before optimizing compilers, it was a no-brainer to drop to assembly for high-use functions; I could cut 70% off a program's run time. With optimizing compilers, I can rarely get better than a 10% time savings for a specific routine, and it takes much longer to code. I need very long program runs, or lots of users, to justify the time. Quality is what the customer needs, not what we are capable of providing when we go full bore.
Thursday, March 12th, 2009 01:30 am (UTC)
I find it interesting — I sketched out two points on a continuum, and your immediate assumption seems to have been (seems to! I could be wrong!) that both points are a priori invalid regardless of what the end product is. Sometimes each is appropriate. If I'm a client looking for someone to write a quick script to do a job where, if the script fails, no serious harm is done to the system, then yes, it's perfectly reasonable to hire the two-year college grad. But if I'm looking for someone to write a massive SQL update over a billion records, all of which are essential to company functioning and for which no backup has been made in the last year… well. I might very well want the other endpoint.

Unfortunately, real-world problems are not so easily categorized.

We're agreed that the wise engineer knows where to position a project on the spectrum. My point is not that we should drown the client past what their needs are — my point is the client very often does not know what their needs are, qualitywise, and does not know how to evaluate the claims of contractors. As a result of this, more often than not clients think on the basis of raw dollar figures, seeking the lowest bid and getting exactly what they paid for.

That's my answer to [livejournal.com profile] argonel's question of why competence is such a high bar to pass.
Thursday, March 12th, 2009 02:31 am (UTC)
The customer may not be able to articulate their rigor requirements, but they know them.

You need to sell based on value, not price. In real terms, very few people shop on price alone, and you don't want them as customers anyway! They pay you twice for doing business with a competitor.

The continuum is a correct concept. There is a vast difference between a NASA/JPL program requirement and a daily backup perl script trigger. Your customer (in some cases, your employer) knows what the need is. Even on the same job, not all programs need the same level of assurance/testing. The client knows! You just need to find a language in common so that those needs are understood.
Thursday, March 12th, 2009 02:55 am (UTC)
I disagree. The client rarely knows the rigor requirement.

As an example: imagine that you're told, "we need you to do X on these SQL tables, and it's okay if things blow up." You do X and things blow up.

"No problem," your client says, "restore it from backup." Sure. Where's your backup? "Here."

... but these files are corrupt. Didn't you guys test this backup before you put it in storage?

"We didn't know you needed to do that."

Yes, I have had that conversation with people before. That's one of the reasons why no, I do not believe clients know what their rigor requirements are.

That's why we have requirements gathering as a phase of software development.
Thursday, March 12th, 2009 03:34 am (UTC)
The client knows that the table is critical for business operation. The client knows that the update needs to work on the whole table in one shot. The client knows it is important.

In that situation, I do a backup of the table before I start! I also verify that I have a good copy before I start. If I have the space, I like two copies of the data I am going to modify. Then, if I have the time, I do a full verification of the changed data. If there is time to do a restore from backup, there is time to satisfy my verification needs.
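That backup-verify-update-verify discipline can be sketched in miniature. Here is one possible shape of it, using SQLite as a stand-in engine; the `orders` table, its columns, and the particular update are invented purely for illustration:

```python
import sqlite3

def safe_update(db_path):
    """Back up the target table, verify the copy, then update inside a transaction."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.cursor()
        # 1. Take a backup copy of the table before touching it.
        cur.execute("DROP TABLE IF EXISTS orders_backup")
        cur.execute("CREATE TABLE orders_backup AS SELECT * FROM orders")
        # 2. Verify the copy is good before starting.
        (orig,) = cur.execute("SELECT COUNT(*) FROM orders").fetchone()
        (copy,) = cur.execute("SELECT COUNT(*) FROM orders_backup").fetchone()
        if orig != copy:
            raise RuntimeError(f"backup incomplete: {copy} of {orig} rows copied")
        # 3. Do the actual update.
        cur.execute("UPDATE orders SET status = 'archived' WHERE shipped = 1")
        # 4. Verify the changed data before committing; roll back on any miss.
        (missed,) = cur.execute(
            "SELECT COUNT(*) FROM orders WHERE shipped = 1 AND status != 'archived'"
        ).fetchone()
        if missed:
            raise RuntimeError(f"{missed} rows missed by the update")
        con.commit()
    except Exception:
        con.rollback()
        raise
    finally:
        con.close()
```

The point is the shape, not the specifics: the copy is verified before the risky step begins, and the changed data is verified before the change is made permanent.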

We are talking at cross purposes. The client knows how important it is to get things right, and at what pace and in what timeframe. We need to determine what we need in order to meet that requirement. That is our job. If we have a potential need for a restore from backup, we need to ensure that the backup is sound. That is the extrinsic information we need to draw out during the requirements-gathering phase. That is the language translation I was thinking about: changing things from customer vocabulary to CS vocabulary. Once we have a common language, the customer knows what their needs are.
Thursday, March 12th, 2009 03:40 am (UTC)
You're cherrypicking at this point. What you're saying is, "in that situation, rather than rely on what the client knows to be true, I would have investigated and made sure of the one thing which went wrong in your situation."

But the domain of possible problems like that is enormous. That sort of cherrypicking isn't possible except in hindsight.

I operate under the belief the client knows their business operations and knows the outcome they want. Everything else — everything else — must be determined independently rather than taken on faith. Outside of their business operations and their desired outcome, I have to assume the client is dead wrong about at least one thing that will bite me in the ass — and it's my job to find that thing before it has the opportunity to do so.

That means I can't take the client at their word about rigor.
Thursday, March 12th, 2009 03:55 am (UTC)
Not cherry picking, I have been there, done that. I get really anal about the things that can go wrong in a significant update. I check everything under my control. Experience/training makes those kinds of checks incidental. (That is a big part of what the customer is paying for, that value is worth not going to the lowest bidder.)

What you are saying is that the client knows how critical the operation is. That translates to rigor. The more important to the business, the greater the rigor required. I think we are saying the same thing.

My first law states: "If you plan for a contingency, it will never happen." There is a corollary... At some point, it comes down to judgment. When have you searched far enough afield for the biter?
Thursday, March 12th, 2009 11:06 am (UTC)
I still disagree and I still don't think we're talking at cross purposes.

In my example with the botched SQL update, the client knew the job had to be done — the client didn't believe the operation was critical. It was a routine operation that had a low chance of exploding. The client didn't understand the risks they were facing and didn't understand the consequences of those risks coming to pass (mostly, "we run around and scream wildly"). They were willing to pay a low rate for a couple of hours of work because they were satisfied the job didn't warrant more than that.

As you say, "I check everything under my control." It's a great policy and I agree with it. I just emphatically disagree that "experience/training makes those kinds of checks incidental." Experience and training can reduce the amount of time necessary to spend on this overhead while still maintaining your level of professional diligence, but except for trivially small projects I don't see how that overhead can ever be minimized to the level where it can be called incidental.

I share the spirit of your first law. It's not my first law, but it's pretty high on the list. I usually phrase it as "no crisis ever came from a controlled failure." Software failure is not necessarily a bad thing. There was a plane crash recently near Amsterdam where a Boeing autothrottle retarded the engines on approach when the radio altimeter reading dropped abruptly from about 2000 feet above ground level to -8 feet AGL. If the autopilot software had assumed the altimeter was capable of being batshit insane from time to time and reacted appropriately, the disaster would probably have been avoided. The altimeter's failure was not the source of the crisis; that was the autopilot's inability to control the failure.
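The "assume the sensor can be insane" principle amounts to a plausibility gate on each reading. A toy sketch, where the function names and the 300 ft/s rate threshold are invented for illustration and bear no relation to any real avionics code:

```python
def plausible_altitude(prev_alt_ft, new_alt_ft, dt_s, max_rate_fps=300.0):
    """Reject a reading that implies a physically impossible climb/descent rate."""
    if dt_s <= 0:
        return False
    rate = abs(new_alt_ft - prev_alt_ft) / dt_s
    return rate <= max_rate_fps

def filter_altimeter(readings, dt_s=1.0):
    """Carry the last plausible reading forward instead of acting on a wild jump."""
    filtered = []
    last = None
    for alt in readings:
        if last is None or plausible_altitude(last, alt, dt_s):
            last = alt
        # else: keep `last` and flag the sensor, rather than act on the spike
        filtered.append(last)
    return filtered
```

A sudden 2000 → -8 sample gets held at the previous value instead of being handed to the controller: `filter_altimeter([2000, 1990, -8, 1970])` yields `[2000, 1990, 1990, 1970]`. The failure still happened; it just happened under control.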