Disney and Scholastic share a Software Hall of Shame raspberry today.
For what? Disney Magic Artist Deluxe and Scholastic I Spy Fantasy.
Why?
Because they're both children's games — young children, as in Wendy's is giving I Spy Fantasy away in Kids' Meals — that require Administrator privileges to run.
FAIL.
Don't these people ever think before they write code?
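For what it's worth, the culprit is rarely anything exotic: the game writes its saves, settings, or high scores into its own install directory under Program Files (or into HKEY_LOCAL_MACHINE), both of which are read-only for a limited user. A minimal sketch of the right way, in Python for brevity and with made-up names, just to show how little work the fix actually is:

    import os

    # Hypothetical sketch: per-user data belongs in the user's own profile,
    # which is writable without admin rights. Names here are made up.
    def save_game(data):
        base = os.environ.get("APPDATA") or os.path.expanduser("~")
        save_dir = os.path.join(base, "ExampleKidsGame", "saves")
        os.makedirs(save_dir, exist_ok=True)
        save_path = os.path.join(save_dir, "slot1.sav")
        with open(save_path, "wb") as f:
            f.write(data)
        return save_path

    # What the offending titles effectively do instead -- write into the install
    # directory, which fails for any account that isn't an Administrator:
    # open(r"C:\Program Files\ExampleKidsGame\slot1.sav", "wb")  # PermissionError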
no subject
Imbeciles. It's NOT LIKE IT'S A RECENT CHANGE. It's been this way since Windows 2000 shipped. If you haven't figured it out by now, almost three major Windows versions later, you need to find another line of work.
no subject
I do agree, though, that it's laziness that they haven't bothered to change with the times.
Even current versions of Windows are barely multi-user. Support for remote users and concurrent use of the computer is rudimentary at best.
Why does competence seem like such a high bar to clear? It isn't like we're expecting greatness out of these people or programs.
no subject
Frankly, I can't say I've ever been very impressed with Scholastic. The only thing they seem to be good at is urging kids to nag their parents into spending wads of money on low-budget book editions and "educational" toys that half the time are complete junk.
no subject
I am just now getting to the point in my life where I feel I could be trusted to write reasonable-sized pieces of code and not run the risk of making a total hash of it with a 1.0 release. I'm a thesis away from a Ph.D. in computer science and have over fifteen years of professional programming experience in eighteen different languages. If I were to work for a consultancy, I would likely be billed at $300/hr or more. Let's say that it takes me X hours to do a professional, competent job: a documentation stack, unit tests, software iterations with customer feedback at the end of each. The finished product costs $300X and is of high quality.
Compare this to someone with a two-year college degree who says "sure, I can do that, let me bash it out really quick in Java...!" They won't bother with documentation or unit tests -- you'll be lucky if they've even heard of the latter, and they won't know how to write effective engineering documentation anyway. They won't gather the requirements effectively, they won't involve the client in the design, and the final product will be dropped on the client's desk as a fait accompli. "Here. You said it should do Y, and it does. I mean, I think it does. I realized halfway through that Y is kind of ambiguous. So I just figured out what I thought you probably meant and went on from there."
The two-year college grad charges $20/hr, and spends probably 0.5X hours. When it comes to actual coding, I leave the kid in the dust — what took the kid a month of pure coding to do, I knocked out over a long weekend. But I had a long engineering process leading up to that, which means that on balance the kid did it in half the time and for a thirtieth the cost.
From the client's perspective, both products seem to be equivalent. Both work fine on the client's PCs, after all. They both have the appropriate eye candy. The client decides to save themselves 97% of the cost and go with the two-year college grad, and then talk to their bosses about how "we found this kid, he's awesome, he did the same job as that professional engineer in half the time and for peanuts!"
Then the two-year college grad's code gets deployed on a million desktop PCs and all hell breaks loose.
Yep. False savings.
no subject
Writing solid code is a habit. Writing elegant code is a gift. If you have both, you should do OK. Learning all the skills you enumerate around writing code is useful, but seldom necessary. There are things like deadlines and budgets that are often more important than full iterative testing with viability proven against trial data. Once you know how to do the full job, success depends on knowing which parts of the full cycle are most important to the customer.
I have sat on both sides of the table, programmer and manager, business owner and customer. Providing 110% of what the customer wants delights them. Providing 200+% of what the customer wants ticks them off. (They could have had it faster AND less expensive.)
It is like me writing code in assembler. Before optimizing compilers, it was a no-brainer to drop to assembly for high-use functions; I could cut 70% off the program's run time. With optimizing compilers, I can rarely get better than a 10% time savings for a specific routine, and it takes much longer to code. I need very long program runs, or lots of users, to justify the time. Quality is what the customer needs, not what we are capable of providing when we go full bore.
no subject
Unfortunately, real-world problems are not so easily categorized.
We're agreed that the wise engineer knows where to position a project on the spectrum. My point is not that we should drown the client in more quality than they need — my point is that the client very often does not know what their needs are, quality-wise, and does not know how to evaluate contractors' claims. As a result, more often than not clients think in terms of raw dollar figures, seeking the lowest bid and getting exactly what they paid for.
That's my answer to
no subject
You need to sell based on value, not price. In real terms, very few people shop on price alone, and you don't want them as customers anyway! They pay you twice for doing business with a competitor.
The continuum is a correct concept. There is a vast difference between a NASA/JPL program requirement and the Perl script that triggers a daily backup. Your customer (in some cases, your employer) knows what the need is. Even on the same job, not all programs need the same level of assurance/testing. The client knows! You just need to find a common language so that those needs are understood.
no subject
As an example: imagine that you're told, "we need you to do X on these SQL tables, and it's okay if things blow up." You do X and things blow up.
"No problem," your client says, "restore it from backup." Sure. Where's your backup? "Here."
... but these files are corrupt. Didn't you guys test this backup before you put it in storage?
"We didn't know you needed to do that."
Yes, I have had that conversation with people before. That's one of the reasons why no, I do not believe clients know what their rigor requirements are.
That's why we have requirements gathering as a phase of software development.
no subject
In that situation, I back up the table before I start, and I verify that I have a good copy before I touch anything. If I have the space, I like two copies of the data I am going to modify. Then, if I have the time, I do a full verification of the changed data. If there is time to do a restore from backup, there is time to satisfy my verification needs.
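To make that concrete, here is roughly the shape of it -- a minimal sketch using Python's built-in sqlite3, with an invented database and table names since the original engine was never specified. A row-count check is the bare minimum; a checksum over the rows would be stronger.

    import sqlite3

    # Hypothetical sketch: copy the table, then prove the copy is good before
    # any destructive work begins. Names below are invented for illustration.
    def backup_and_verify(db_path, table, backup):
        conn = sqlite3.connect(db_path)
        try:
            conn.execute(f"DROP TABLE IF EXISTS {backup}")
            conn.execute(f"CREATE TABLE {backup} AS SELECT * FROM {table}")
            conn.commit()

            # The cheapest sanity check: the copy has to hold every row.
            (orig_rows,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
            (copy_rows,) = conn.execute(f"SELECT COUNT(*) FROM {backup}").fetchone()
            if orig_rows != copy_rows:
                raise RuntimeError(f"backup {backup} is short: {copy_rows} vs {orig_rows} rows")
        finally:
            conn.close()

    backup_and_verify("client.db", "orders", "orders_pre_update")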
We are talking at cross purposes. The client knows how important it is to get things right, and at what pace and in what timeframe. We need to determine what we need in order to meet that requirement. That is our job. If we have a potential need for a restore from backup, we need to ensure that the backup is sound. That is the extrinsic information we need to infer during our requirements-gathering phase. That is the language translation I was thinking about: changing things from customer vocabulary to CS vocabulary. Once we have a common language, the customer knows what his needs are.
no subject
But the domain of possible problems like that is enormous. That sort of cherrypicking isn't possible except in hindsight.
I operate under the belief the client knows their business operations and knows the outcome they want. Everything else — everything else — must be determined independently rather than taken on faith. Outside of their business operations and their desired outcome, I have to assume the client is dead wrong about at least one thing that will bite me in the ass — and it's my job to find that thing before it has the opportunity to do so.
That means I can't take the client at their word about rigor.
no subject
What you are saying is that the client knows how critical the operation is. That translates to rigor. The more important to the business, the greater the rigor required. I think we are saying the same thing.
My first law states: "If you plan for a contingency, it will never happen." There is a corollary... At some point, it comes down to judgment. When have you searched far enough afield for the biter?
no subject
In my example with the botched SQL update, the client knew the job had to be done — the client didn't believe the operation was critical. It was a routine operation that had a low chance of exploding. The client didn't understand the risks they were facing and didn't understand the consequences of those risks coming to pass (mostly, "we run around and scream wildly"). They were willing to pay a low rate for a couple of hours of work because they were satisfied the job didn't warrant more than that.
As you say, "I check everything under my control." It's a great policy and I agree with it. I just emphatically disagree that "experience/training makes those kinds of checks incidental." Experience and training can reduce the amount of time necessary to spend on this overhead while still maintaining your level of professional diligence, but except for trivially small projects I don't see how that overhead can ever be minimized to the level where it can be called incidental.
I share in the spirit of your first law. It's not my first law, but it's pretty high on the list. I usually phrase it as "no crisis ever came from a controlled failure." Software failure is not necessarily a bad thing. There was a plane crash a while ago in South America where a Boeing autopilot scaled back the engines on landing when the altimeter reading dropped abruptly from 2000ft above ground level to -8 feet AGL. If the autopilot software had assumed the altimeter was capable of being batshit insane from time to time and reacted appropriately, the disaster would probably have been avoided. The altimeter's failure was not the source of the crisis; that was the autopilot's inability to control the failure.
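In that spirit, the defense amounts to a plausibility check on the sensor before anything acts on it. A toy sketch in Python with invented thresholds -- real flight software is far more involved, but the shape of the check is the point:

    # Hypothetical sketch: refuse to act on a reading that implies a physically
    # impossible change since the last good sample. Numbers are invented.
    MAX_PLAUSIBLE_RATE = 300.0  # feet per second of climb/descent we'll believe

    def filter_altitude(reading_ft, last_good_ft, dt_s):
        """Return (altitude to act on, whether the new reading was accepted)."""
        rate = abs(reading_ft - last_good_ft) / dt_s
        if reading_ft < -5.0 or rate > MAX_PLAUSIBLE_RATE:
            # The sensor is lying: hold the last good value and flag the fault
            # instead of letting one wild sample drive the throttles.
            return last_good_ft, False
        return reading_ft, True

    # A 2000 ft -> -8 ft jump in one second fails the check and gets held/flagged.
    print(filter_altitude(-8.0, 2000.0, 1.0))  # (2000.0, False)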
no subject
This.
no subject
Win2K was meant for Corporate America, not for home users. Home users got to wait until XP in 2001, and even then, adoption wasn't really big until mid-2002.
XP is six and a half years old. Long in the tooth, sure — but if you're going to run a mass-market giveaway promotion, you have to consider the remaining 98/ME crowd. Much as I would like it to be otherwise.
no subject
No matter what we may want to believe, Windoze is still a single-user OS that just happens to be better at task switching than DOS. Trying to make it something else is just so much wishful thinking. If Micro$oft attempts to rearchitect the system to be more secure or a better client, they will break so many legacy applications that the transition to Linux will be automatic. Micro$oft is screwed: they can't fix it, and they can't change it. If they weren't so amoral about keeping the transition from happening, I would almost feel sorry for them.