Wednesday, December 19, 2007

Please don't believe anything in this post

http://discuss.joelonsoftware.com/default.asp?joel.3.574487

In Yet Another Internet Forum Discussion About Offshoring (I hereby claim authorship of the acronym YAIFDAO) someone wrote:

A lot of decisions should NOT be left to developers to make. imho, the time to think out of the box is gone by the time it is TIME TO CODE. It's not time to think about alternatives to what to do.
That is absolutely right. You never want developers talking to end users. They might suggest some other plan than what was painstakingly shepherded through four levels of approvals.

And let's just squash the notion right now that sometimes there are trade-offs to consider. Just because the analyst's solution will take three weeks of coding effort and a new application server, while the programmer knows of a reusable component that will take one hour and no increased hardware, is no reason to institute the Change Control Process.

Alternatives should always be considered in isolation from the impact they cause. Implementation issues should never be allowed to intrude into the world of business decisions.

Next thing you know someone's going to suggest that maybe mere programmers could have a meaningful contribution to make to the business process. What rubbish.

Thursday, December 13, 2007

How to Kill a Project by Accident

In my last post I talked about perverse incentives in project management. Something I mentioned in passing was what happens when you don't have positive incentives: the cases where there is simply no incentive to do the right thing. I realized there's actually another way to get this wrong by trying too hard to get it right.

Let's say you've just finished a project and gone into production, and it blew up. Data corruption, security problems, too slow: everything that can go wrong with a product did. First you do an emergency project to fix it, then you do the after-action review to see what went wrong.

What you find is that you didn't have good test coverage. There were whole modules that were never reviewed before going into production. It's painfully obvious that there was a complete breakdown in the QA process.

Fixing yesterday's problem

You're not going to make that mistake again. You write up policies for demonstrating test coverage. You create reports to track test execution and results. You re-write employee performance expectations to align with the new methodology. (If you've read my stuff before, you should hear the alarm bells start ringing when you see the "m word".)

Your next project is going exactly according to plan. Test coverage is at 90% overall, with 100% of high priority use cases covered. You're on schedule to execute all test cases before acceptance testing starts. Defects are identified and corrected.

Then the users get it. They hate it. It doesn't do anything the way they wanted it to. Not only that, it doesn't even do what they asked for. How could you be so far off?

You don't get what you don't measure

Look back at what incentives you created. You reward doing the methodology, following the checklist. Test coverage is great, but it's not the goal of the project. The goal is to provide something of value to the users ... or at least it should be. Did you include a line in the new process that explicitly says, "Check with the users that it's doing what they need"?

So how do you create the right incentives? Just flip the emphasis. Instead of saying an employee's performance evaluation is 80% following the methodology and 20% client satisfaction, turn the numbers around. Your users don't care that you followed "best practices." They care that the product does what they need. Where is that measured in your methodology?

Thursday, December 6, 2007

How to Fail by Succeeding

Dave Christiansen over at Information Technology Dark Side is talking about perverse incentives in project management, which he defines as:

Any policy, practice, cultural value, or behavior that creates perceived or real obstacles to acting in the best interest of the organization.
One class of these perverse incentives comes from the methodology police. These departments exist to turn all processes into checklists. If you would just follow the checklist, everything would work. But more importantly, if you follow the checklist you can’t be blamed if you fail.

How’s that for perverse?

A great example is that there is rarely any incentive to not spend money. Before you decide I’m out of touch with reality, notice I didn’t say “save money.” I said “not spend money.” Here’s the difference.

IT by the numbers

Let’s say you do internal IT for a company that produces widgets. Someone from Operations says that they need a new application to track defects. If you follow the checklist, you:
  • engage a business analyst, who
  • documents the business requirements, including
  • calculating the Quantifiable Business Objective, then
  • writes a specification, which is
  • inspected for completeness and format, then
  • passed on to an architect, who
  • determines if there is an off-the-shelf solution, or if it needs custom development.
At this point you’re probably several weeks and tens of thousands of dollars into the Analysis Phase of your Software Development Lifecycle. Whose job is it to step in and point out that all Ops needs is a spreadsheet with an input form and some formulas to spit out a weekly report?

Let’s put some numbers to this thing.

Assume the new reporting system will identify production problems. With this new information, Operations can save $100,000 per month. A standard ROI calculation says the project should cost no more than $2.4-million, so that it will pay for itself within two years.

Take 25% of that for hardware costs and 25% for first-year licensing, and you’ve got $1.2-million for labor costs. If people are billed out at $100/hour – and contractors can easily go three to four times that for niche industries – that’s 12,000 hours, or 300 man-weeks of labor. Get ten people on the project – a project manager, two business analysts, four programmers, two testers, one sysadmin – and that’s about seven months.

If everything goes exactly to plan, seven months after the initial request you’re $2.4-million in the hole and you start saving $100,000 per month in reduced production costs. Everyone gets their bonus.

And 31 months after the initial request, assuming nothing has changed, you break even on the investment. Assuming the new system had $0 support costs.

But what if …

Way back at the first step, you gave a programmer a week to come up with a spreadsheet. Maybe the reports aren’t as good as what the large project would have produced. You only enable $50,000 per month in savings. That week to produce it costs you $4,000 in labor, and $0 in hardware and licensing.

You are only able to show half the operational savings, so you don’t get a bonus. You don’t get to put “brought multi-million dollar project in on time and on budget” on your resume.

And 31 months after the initial request, the spreadsheet has enabled over $1.5-million in operational savings.
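The whole argument is arithmetic, so it's easy to check. Here's a minimal sketch in Python, using only the figures assumed above (and rounding the spreadsheet's one week of development up to a month), comparing where each approach stands 31 months after the initial request:

    # Back-of-the-envelope comparison of the two approaches above.
    # All figures are this post's assumptions, not real data.

    def cumulative_net(monthly_savings, upfront_cost, months_to_deliver, horizon):
        """Net position at the horizon; savings only accrue after delivery."""
        months_saving = max(0, horizon - months_to_deliver)
        return monthly_savings * months_saving - upfront_cost

    HORIZON = 31  # months after the initial request

    # The full project: $2.4M budget, 7 months to deliver, $100k/month savings.
    big = cumulative_net(100_000, 2_400_000, 7, HORIZON)

    # The spreadsheet: $4k of labor, ~1 month to deliver, $50k/month savings.
    small = cumulative_net(50_000, 4_000, 1, HORIZON)

    print(f"Big project net at {HORIZON} months:  ${big:,}")    # $0 -- break-even
    print(f"Spreadsheet net at {HORIZON} months: ${small:,}")   # $1,496,000

The big project claws its way back to zero at the same moment the spreadsheet is roughly a million and a half ahead.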

Monday, May 14, 2007

How to Finish IT Projects Faster with Less Documentation

http://discuss.joelonsoftware.com/default.asp?joel.3.494620

If you’re responsible for running an IT project, you want things done on time and within budget. So how do you set your schedule and budget? Hopefully you define what you want to accomplish, and then ask the developers how long it’s going to take. If you’re putting out a request for proposal (RFP), you’ll get several different answers to that question. Typically the highest consideration in the selection is the total proposed cost. But really, the total time is a better choice.

Why that's true is based not on ideas about processes, but on ideas about people.

Consultants live and die by billable hours. In the short term, they don't have any incentive to finish their current project any faster. But in the long term, finishing faster should lead to more work as clients come to respect their ability to meet a deadline. If project managers and clients learn to value that behavior, that is.

How it Could Be

Let's look at a production support issue for an example. Production support is completely different from most project work in one very important way: the problem is well defined. Something worked on Monday, it doesn't work on Tuesday. Make it work just like Monday again.

For something with six-figure impact per hour of downtime – and if you think that’s an artificially high number, you've never worked with credit card processing – you don't want a programmer with an impressive resume, dozens of certifications, and decades of experience with your primary programming language. You want Bob, the guy who wrote the system from scratch and demands $1k per hour with a four-hour minimum.

Once you've got Bob and his hand-picked team of support people, you get out of their way and let them work. Status reports might be no more than, "We've found the problem … We've identified the solution … We’re ready to test the fix … It’s live."

How it Is

But when it comes to new development, companies play it "safe" and look for the best qualifications on paper. They hire based on keyword matching and offer rates based on industry standards for a given skill set. They require specific processes and deliverables (pet peeve: when did "deliverable" become a noun?) and status reporting becomes a significant percentage of the total budget.

Why the Difference?

There are two reasons expert teams get away with less formal process than is typical, but I can only prove one of them. The public answer business sponsors tell themselves to justify the exception to "official methodology" is that the experts have worked the methodology for so long that they can follow the same procedures without exhaustively documenting all the steps. And there is some truth to that.

But I suspect the larger reason is that experts get the work done so much faster there just isn't enough time for documentation to build up.

The best athletes make things look easy that most people could not even do. A high jumper might clear six feet without even trying hard. Most people would never come close even with months to try.

The best IT people do the same thing. They complete projects in weeks that other people could never do. As "safe" projects drag on, specifications are refined, status reports are produced, contracts are negotiated, updates are requested and provided. Meanwhile another project team has just released to production – so it must have been a small project.

The hard part for the client is to recognize the difference between a project that went smoothly because it was easy, and one that went smoothly because the team made it look easy. But here’s the secret. You don’t really need to recognize the difference.

How to Do Better

The reason you hired someone else to do the work is that you couldn't do it yourself. Which means you can’t accurately judge which projects are actually hard, and which ones just look hard. So don’t judge the project, judge the people.

The people who seem to always be working on small, simple projects – after all, they always go quickly with no major problems – are better at execution. They will be better no matter what the project is.

Thursday, March 29, 2007

A Tale of Two Techies

You've just finished learning MS SQL 4.2 and VB 5 in school. You get a job with a company that has just upgraded to those languages. You get to learn the ins and outs of the languages along with the rest of your co-workers.

Two years later you want to upgrade from your entry-level salary. You know that big raises only come from job hopping, and you see that most of the job ads are for VB6, so you start studying it on your own. You find some cool new things you think would help in your current job and start pushing people to upgrade.

Your tech lead, Bob -- who has been with the company for seven years -- says the current applications are stable and it's not worth the cost to upgrade. The boss listens to Bob instead of you. You say bad things about Bob on an internet forum.

You find a new contract gig working with VB6, and making almost as much as Bob does. Boy, Bob sure is dumb. If he had any balls he'd have jumped already. You start racking up frequent-flyer miles chasing the next gig. Bob evaluates and recommends a new third-party tool that uses VB6. He gets a VB6 class for his whole staff included in the project cost.

Five years later, you're a .NET hired gun and you know which airports have the best frequent-flyer clubs. You've got a fat bank account and all the best buzzwords on your resume.

Bob is still with the same company, but now he's the IT Director. He's not making as much as you, but he's vested and his 401(k) is looking pretty good. He hasn't touched any code in a couple of years, but he has a few long-term employees working for him whose opinions he trusts. He also hasn't answered an after-hours page for a few years.

You meet Bob on a street corner one day and talk about old times, catch up on what's been happening. Suddenly a car jumps the curb and puts you both in the hospital. Oops.

You were smart enough to get good medical insurance, but your income stops since you're not billing hours any more. Bob goes on medical leave. His wife takes his two children out of junior high and comes in to visit. Your pregnant wife comes in to visit. (You spent your twenties traveling, so you're just starting your family.)

Three months later, Bob goes back to work part-time while you sit at home, surfing the net, searching for a gig that will give you the flexibility you need to work around your physical therapy.

By the end of the year, your savings are gone. Microsoft has released the Next Big Thing after .NET, and you don't have any work experience with it on your resume. You're applying for maintenance gigs on "legacy" apps -- two-year-old .NET apps written by guys straight out of school, who just left for their first not-entry-level jobs. Maybe in another year or two you'll be able to climb back onto the leading edge.

Bob just accepted an internal transfer to run the division he's been supporting for the last decade. He recommends as his replacement the long-term employee who filled in for him during his absence. The last division head held the job for 15 years until his retirement. Bob could do the same, and retire with a decent pension when he's 60.

Tuesday, February 20, 2007

Principle: Everyone should hit the ground running

Total size of the codebase doesn't matter. Everything a new programmer touches is either shared by other parts of the system or it's not. If it's shared, the new guy probably shouldn't be touching it anyway. If it's not shared, it should be small and self-contained enough to learn quickly.

Now if you've got a guy in his first month who wants to re-write your DB access class, then you've got a whole different problem than just getting him up to speed. And if he's right about it needing to be re-written, the problem isn't with the new guy.

How to avoid analysis paralysis in the interview

Prepping for a job interview is a little like a politician prepping for a debate. There are certain questions you can count on hearing, and you prepare canned responses to them. In the job interview these would be things like, "Why did you leave your last job?" and "Where do you see yourself in five years?"

Then there are the technical screening questions. They'll focus on your experience and knowledge. Typically you'll get some basic stuff to begin with, just to see if you really did all those things you put on your resume.

Then you start getting to the interesting questions. The ones that don't have a clear "right" answer. Or do they? Does the interviewer have something specific in mind? Will I blow it by not coming up with the right answer?

Take these two technical questions as an example:

1) When or why would you consider using an RDBMS (like MySQL etc) as opposed to a desktop database (sqlite etc)?

2) When creating a function, how many parameters would you allow that function to handle?
Programmers tend to enter the field, and stay in it, because they're good at finding right answers. Computers are (mostly) deterministic. The upside for the programmer is that they know with certainty when they've solved a problem. The downside is that they know with frustrating certainty when they haven't. They can't just dress up "because I said so" in reasonable-sounding logic and turn it in. Their program has to actually work.

But the questions above don't have right answers. They aren't intended to see if someone knows how to implement a given solution. They're intended to see if someone knows how to ask the right questions to choose the right solution.

Anyone can look up a linked list implementation, or a quicksort. Most interviewers are more interested in identifying the guy who can figure out which one to use.

For the parameters question above, I'd want somebody to state a basic rule of thumb -- probably something from 3-10 "feels" like a reasonable starting point. But they should also explain their reasoning, which might be along the lines that too many parameters likely indicate too much going on in one function. Then possibly present an exceptional case where a high parameter count would be preferred.

Maybe there would be some discussion around passing an array of name/value pairs instead of multiple individual parameters, or named parameters, default values, etc. Or pro/con on passing all the values for an object in the new() declaration, vs. a basic instantiation and then multiple set() calls.
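To make those talking points concrete, here's a quick sketch of the styles a candidate might compare. The names and signatures are hypothetical, invented purely to illustrate the trade-offs:

    # Hypothetical sketches of the alternatives a candidate might discuss.

    # 1. Many positional parameters: every call site has to get the order right,
    #    and adding a parameter later breaks or confuses existing callers.
    def create_user(name, email, age, country, newsletter, referrer, plan):
        ...

    # 2. A couple of required parameters plus named parameters with defaults:
    #    callers only spell out what differs from the common case.
    def create_user_v2(name, email, *, age=None, country="US", newsletter=False):
        ...

    # 3. Basic instantiation followed by set() calls: fewer parameters in new(),
    #    but the object can be used while it's still only half-configured.
    class User:
        def __init__(self, name, email):
            self.name = name
            self.email = email
            self.country = "US"  # default, overridable after construction

        def set_country(self, country):
            self.country = country

None of these is the "one right answer" either; each is a prompt for the candidate to explain when and why they'd reach for it.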

These specifics are not the "one right answer". They're examples of the kind of answer I'd hope to hear. If I asked you the question and you couldn't either float some ideas or ask some reasonable questions, I'd take it as a sign of lack of experience. Interviews are not a multiple-choice test, they're essay format. If the interviewer or the candidate acts like it's multiple-choice, they're wrong.

Trying hard to not get it

If you're interviewing people for entry-level jobs, maybe you do just need to know that the candidate can implement a particular pattern. Once you get past entry level, you'll want to see that the candidate can identify which pattern they should implement. Best is to find a candidate who knows why to choose a particular pattern. Which means they know other patterns and why not to choose one of them instead.

You can make it through a class in school only learning the one technique that you're going to be tested on. That's why entry-level people aren't trusted to make choices, just to do what they're told. Once you get out of the classroom, you have the freedom to re-invent everything, making the same mistakes everyone has made before you. The more ways you've seen to solve a given problem, the more choices you have. I'm not interested in finding someone who's always going to choose the same solution as me. I want someone who knows the general principles and how to prioritize competing goals.

But some programmers will get nervous, and others downright hostile, if you ask a question without a clear right answer. As far as they're concerned, it's a "bad question" that you shouldn't even ask. Apparently any question where the answer starts with "It depends" is a trick question, and even asking it is an insult.

How you should answer

Remember those courses you took where the instructor would tell you to show your work? Even if the answer was wrong, you could get partial credit for having the right approach. When interviewing it's all about showing the work. The "right" answer is almost incidental.

But sometimes there is a right answer. Doesn't that matter? Absolutely. And knowing whether you're dealing with one of those questions or not is an important skill.

If my goal is to evaluate whether someone knows how to clarify an incomplete requirement, I don't start by telling them, "Now I'm going to give you an incomplete requirement to see if you know to clarify it." I ask the question as though there is a right answer and see what they say.

So maybe you read all of this and think that now you know what I'm looking for. That doesn't help you with someone else who may have a different plan. The good news is that it doesn't matter, as long as you answer truthfully.

If the interviewer is looking for a specific answer and you give multiple alternatives, you may be giving a better answer than what he expected. Or maybe he'll disagree with your reasoning and decide that you are wrong. Do you want to work for someone who will shoot down your ideas?

Or the question was supposed to be open-ended but you give a single definitive answer. A good interviewer will prompt you to explain, maybe even asking about specific alternatives. Did you have a good reason to discount those answers? Explain it. Did you not think about that? Admit it. Maybe you're really not a good fit for this position.

Sure, you need a paycheck. But unless you are desperate you also need a good fit. If you and the person you'll be working for don't see things the same way, you won't get that fit. So instead of trying to overanalyze what the interviewer "really meant", just answer as honestly and completely as you think you can. If you, the interviewer and the position are a match, all that's left is to talk about the money.

Monday, February 19, 2007

Get to the point

Dave Christiansen over at Information Technology Dark Side has a good graphic representing what he calls the FSOP Cycle (Flying by the Seat of Your Pants). The basic idea is that when smart people do good things, someone will try to reproduce their success by doing the same thing that worked the first time.

The problem with trying to do this is that every time someone tries to document a "successful process", they always leave off the first step: get smart people working on the project.

Dave outlines several reasons why capital-P Process will never solve some problems. Joel Spolsky described this same issue in his Hitting the High Notes article when he wrote, "Five Antonio Salieris won't produce Mozart's Requiem. Ever. Not if they work for 100 years."

So if Process can't solve your problem, what will? According to Dave, it's simple:

Put a smart PERSON in the driver's seat, and let them find the way from where you are to where you want to be. It's the only way to get there, because process will never get you there on its own.
I'll assume that when Dave says "smart" he really means "someone good at solving the current problem". I could have the world's smartest accountant and I wouldn't want him to remove my appendix. Okay, I don't want anyone to remove my appendix. But if I needed it done, I'd probably go find a doctor to do it. So what Dave is saying is that you'd rather have someone who's good at solving your current type of problem, than have Joe Random Guy trying to follow some checklist.

This isn't a complete answer, though. Some songs don't have any high notes.

In the middle of describing why you should choose the most powerful programming language, Paul Graham writes:
But plenty of projects are not demanding at all. Most programming probably consists of writing little glue programs, and for little glue programs you can use any language that you're already familiar with and that has good libraries for whatever you need to do.
Trevor Blackwell suggests that while that may be changing, it was still true at least until recently:
Before the rise of the Web, I think only a very small minority of software contained complex algorithms: sorting is about as complex as it got.
If it's true that most programming doesn't require the most powerful language, it seems fair to say most programming doesn't require the best programmers, either.

You might notice at this point (as I just did) that Dave wasn't talking about programming, or at least not only programming. The same principle seems to hold, though: Average people can do the most common things with the most common tools. Exceptional circumstances require exceptional tools and/or exceptional people. If only there were a way to predict when there will be exceptions …

The other problem

But let's say we're looking at genuinely exceptional people who have done great work. Should we ask them how they did it? After all, they're the experts.

Well, not really. They're only experts on what they did, not why it worked. They might have simply guessed right. Even if they didn't think they were guessing.

Need another example of false authority? Have you ever heard someone describe a car accident, and attribute their survival to not wearing a seatbelt?

First, they don't know that. They believe it. They obviously didn't do a controlled experiment where the only difference was the seatbelt. Second, even if they happen to be right in this case, statistics show that it’s much more common for the seatbelt to save you than harm you.

So if you want to design a repeatable process for creating software, you can't do it by asking people who are good at creating software.

The other other problem

One change I'd make in Dave's FSOP diagram is in the circle labeled "Process Becomes Painful". It's not that the Process changes. Really what's happening is that the project runs into some problem that isn't addressed by the current Process.

Every practice you can name was originally designed to solve a specific problem. On very large projects, there's the potential to encounter lots of problems, so extensive Process can simultaneously prevent many of those problems from appearing.

But attempting to prevent every conceivable problem actually causes the problem of too much time spent on the Process instead of the project.

That's where Agile comes in. It solves the problem of too much process. Do you currently suffer from too much process? Then you could incorporate some ideas from Agile.

But make a distinction between using ideas to deal with specific problems, and adopting a whole big-M "Methodology". Once you adopt a Methodology designed to solve the problem of too much Process, you face the danger of having too little Process.

Your expert programmers may be the best in the world at the problem you hired them for. But now you want them to do something different. And you have no way to recognize that they're not making progress, because you've eliminated all the governance that came with the heavyweight Methodology.

So what's the point, anyway?

Process is not something you can measure on a linear scale, having "more process" or "less process". Always adding practices -- even "best" practices -- to your current Process is simply trying to solve all possible problems, whether you're currently having them or not.

For each practice, you have to consider what it was designed to solve: what was the point of it to begin with? Then don't treat these practices as a checklist that must be completed every time; follow the principles behind them.

There's a line in the movie "Dogma" that I think describes this really well. Just substitute "process" for "idea" and "Methodology" for "belief":

You can change an idea. Changing a belief is trickier. Life should be malleable and progressive; working from idea to idea permits that. Beliefs anchor you to certain points and limit growth; new ideas can't generate. Life becomes stagnant.

What conclusion would you like me to reach?

Why is it that the rest of the world has functioned fine on estimating projects of many sizes and scopes, then sticking to them; while IT screams "OH WE can't do that!"?

I'll admit a large reason is because lots of people in IT are really bad at estimating. But part of the reason we've managed to stay so bad at estimating, and the main reason all estimates seem to be so far off, is that the business side wants the estimates to be low.

The common complaint is that IT projects are "always over time and over budget". Since the IT budget (for software) is almost entirely salary, time and budget are synonymous. When you set the budget, you’ve just set the time.

But most IT projects -- and nearly every one I've worked on -- have a budget set before the detailed design is ever done. Or at least the clients have an idea in mind of what they'd like it to cost. So without realizing they’re doing so, the clients sometimes set the time estimate before they’ve ever talked to the developers.

I had to help out some less-experienced people who were being asked for estimates. I told them the trick is to figure out a diplomatic way to ask, "What number would you like me to say?"

The funny thing is that this is not just being cynical, either. There's real value to it. If you're thinking four months and the client says three days, there's a good chance you don't have the same idea in mind for what you plan to do.

For instance, they ask you to write a search engine "like Google" to put on your intranet. You're thinking a couple of months. They're thinking end of the week.

It turns out they want you to buy a Google search appliance and integrate it into the intranet. To the client, "write," "create," "implement," "install," and all those other words we use that mean different things ... all mean the same thing: work that the IT guys do.

So if you want to get the work -- or if you’re an employee and have no choice in the matter -- see what number they have in mind already and tell them what they can have in that length of time. If you don't promise to deliver something in the given time frame, they'll go find someone who will. Not that they'll deliver in that time, but they'll promise to deliver in that time.

Compare this to construction. The client may get five quotes for pouring concrete. If four of the bids are close to $50k, but one of them is $20k, the client will likely assume the low bid is unrealistic and choose among the remaining contractors.

But if a programmer or independent software vendor says some work will cost $50k, the client can find someone who will promise to deliver for $20k. The sponsor will either accept the lower bid, or use it to negotiate the first vendor down to $25k. When it ends up costing $50k, that project goes in the books as "over time and over budget".

What would it look like if construction bids were awarded the same way? If for instance the client were required by law to select the lowest bid? Then contractors would low-ball every bid to get the contract. You’d end up with construction that always ran over time and over budget. You would need an endless supply of money to stay in business. You would need to be … the government.

Thursday, February 15, 2007

Installing software is not worth my time

http://discuss.joelonsoftware.com/default.asp?joel.3.451613

My PC is managed according to corporate standards. I can't install anything without an administrator signing on and authorizing it. I asked if I could install the software to link to my cell phone and load my contacts into it. Otherwise I'd be spending a couple of hours over the next week manually entering them all in via the keypad.

The day after I put the ticket in, someone called up and asked where the software was. I told him I had it on a CD. The local support guy came up the next morning, saw that it was an OEM disk and not some random thing I'd burned, and logged in as administrator so I could install it.

Sure, it was two days before I got what I needed, but it wasn't a compelling gotta-have-it-today issue.

Everything else on here is available as a network install, and is set up in my profile. When I get a new PC -- I'm due to be refreshed in the next month or so -- I'll go to a single application, check the boxes for everything I need, and go to lunch. When I come back, everything will be installed.

Every application that I install myself I'd have to reinstall when I get a new PC. How many hours would that take? Multiply that by the number of users in my office, which just relocated earlier this year. Most of us didn't take the hardware from the old location, we just came to blank systems at our new desks and kicked off the install process.

If I'm paying for it, it is not worth my time to install software. If my alternatives are to load all my apps on a new PC I've just bought, or to work a billable hour and pay someone else to do the installs, I'll pay the Geek Squad to click OK and reboot 14 times. Why should I expect the company I work for, who owns the PC I'm working on, to make a different decision?

Friday, February 9, 2007

The program is not the product

http://discuss.joelonsoftware.com/default.asp?joel.3.449657

Managers want programs to be like the output of a factory. Install the right robots and tooling, start the process, and good software comes out the end.

WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG WRONG!!!!!! (Can you tell I disagree?)

Nearly everyone who makes the factory analogy misses a very fundamental point: the program is not the product. For one thing, unless you work for a software company selling shrink-wrapped software products you aren't ever selling the program. Conventional wisdom says most programmers actually work for internal IT, so it's safe to say most programs are never sold.

Businesses don't want programs, they want credit reports ... and loan contracts ... and title searches ... and purchase orders ... and claim forms ...

So what is the right analogy? The program is the assembly line! So measuring bugs in the program is wrong. Even comparing compiling to manufacturing -- which is better than comparing programming to manufacturing -- is wrong. What you should be looking at is the output produced by the program.

Is your program a website? Measure the number of pages it can produce per unit time, without error. Is it an editor? Measure the number of pages it can spell-check per unit time, and with what accuracy.
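Here's a minimal sketch of what that kind of measurement might look like; render_page is a hypothetical stand-in for whatever output your program actually exists to produce:

    import time

    def render_page(request):
        # Stand-in for the real work: the output the program exists to produce.
        return f"<html><body>page for {request}</body></html>"

    requests = range(10_000)
    errors = 0
    start = time.perf_counter()
    for r in requests:
        try:
            render_page(r)
        except Exception:
            errors += 1
    elapsed = time.perf_counter() - start

    # Measure the output, not the internals: units per second and defect rate.
    print(f"{len(requests) / elapsed:.0f} pages/second, {errors} errors")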

When measuring the output of a manufacturing process, you literally shouldn't care what the process looks like, nor what tools are used, so long as it consistently produces the same output. This is not to say the process and tools don't matter. A bad process may be prone to unexpected failure. Tools may be harder to maintain or have a shorter service life. You may be locked into a service contract with the manufacturer. And coincidentally [ahem] all these factors apply to software.

So yes, programming can be compared to manufacturing. As long as you remember that the program is not the product, the program is the assembly line.

Can the FSF "Ban" Novell from selling Linux?

http://discuss.joelonsoftware.com/default.asp?joel.3.447941

Novell Could Be Banned From Selling Linux: Group Claims

BOSTON - The Free Software Foundation is reviewing Novell Inc.'s right to sell new versions of Linux operating system software after the open-source community criticized Novell for teaming up with Microsoft Corp.
The problem is that the FSF wants all code to be free. Period.

That's their preference, yes.

They want to make the GPL so darned viral that no one can include any copyrighted or patented components. Period.

No, they want all the components on which they hold the copyrights to be protected by those copyrights. And they want those components to be freely available to anyone who agrees to make their modifications available under the same terms.

You can't modify and distribute Microsoft's code without permission. You can't modify and distribute GPL code without permission.

The way you get permission to distribute Microsoft's code is to pay them a lot of money, or cross-license your own code. The way you get permission to distribute GPL code is to release your modifications under the GPL.

Microsoft can destroy your business model by bundling a version of what you make. GPL-using authors can destroy your business model by releasing a free version of what you make.

If you don't want to be bound by Microsoft's terms, write your own code. If you don't want to be bound by the GPL, write your own code.

How is the GPL "viral" while Microsoft is just "business"?

How can the FSF "ban" Novell from selling "Linux" when Linux itself is not wholly licensed under the GPL and not wholly owned by the FSF? Sure, there are many GPL components within the typical Linux distro, but not all of them have to be.

According to Answers.com:
More Than a Gigabuck: Estimating GNU/Linux's Size, a 2001 study of Red Hat Linux 7.1, found that this distribution contained 30 million source lines of code. ... Slightly over half of all lines of code were licensed under the GPL. The Linux kernel was 2.4 million lines of code, or 8% of the total.
So the first point is that no, the FSF can not ban Novell from selling a GNU/Linux-based distribution, as long as all the current license terms are followed.

However, the holder of the Linux trademark, Linus Torvalds, could choose to prohibit them from using that mark to describe what they're selling. (See Microsoft / Sun / Java™.) Though I haven't seen anything suggesting he plans to do so.

Next, the Linux kernel is covered under the GPL, so even if the FSF doesn't hold the copyright, it's entirely possible the kernel authors could ask the FSF to pursue any violations on their behalf. And I suspect Stallman and Moglen would be more than happy to do so.

The bottom line, I think, is that business people who don't understand the technicalities will either see a deal with Microsoft as a reason to choose Novell for any Linux plans, or they will see the controversy as a reason to avoid Linux plans altogether. Either conclusion benefits Microsoft.

People who do understand the details will see that Novell offers them a conditional, time-limited right to use a specific version of Linux, which may or may not interoperate better with Windows systems, which can be effectively "end-of-lifed" at any time by Microsoft.

Friday, February 2, 2007

And this is bad why?

http://www.infoworld.com/article/07/01/29/05OPopenent_1.html

If you try hard enough, I suppose it's possible to spin anything into an attack on your pet target. But the consistency with which Neil McAllister sounds the call of doom and gloom for all things open source is really quite astonishing. Especially when you consider he writes the Open Enterprise column for Infoworld.

Take his January 29th column about the formation of the Linux Foundation for example:

On the surface, the union of Open Source Development Labs (OSDL) and the Free Standards Group (FSG) seems like a natural fit. Open standards and open source software are two great ideas that go great together.

But wouldn't it make more sense to call the merged organization the Open Source and Standards Lab, or the Free Software and Standards Group? Why did they have to go and call it the Linux Foundation?

On the one hand, it seems a shame that the group should narrow the scope of its activities to focus on a single project. Linux may be the open source poster child du jour, but it's hardly the only worthwhile project around.

If Neil had bothered to read his own magazine's newsletter the previous week, he would have known that:

With Linux now an established operating system presence for embedded, desktop and server systems, the primary evangelizing mission that the OSDL and FSG embarked upon in 2000 has come to an end, Zemlin said. The focus for the foundation going forward is on what the organization can do to help the Linux community more effectively compete with its primary operating system rival Microsoft.

The combination of the two Linux consortiums was "inevitable," said Michael Goulde, senior analyst with Forrester Research. "The challenge Linux faces is the same one Unix faced and failed -- how to become a single standard."

So what's wrong with focusing on Linux, anyway?

But then again, maybe it's not so strange -- not if you conclude that the Linux Foundation isn't any kind of philanthropic foundation at all. It's an industry trade organization, the likes of which we've seen countless times before. Judging by its charter, its true goal is little more than plain, old-fashioned corporate marketing.

As such, the Linux Foundation is a unique kind of hybrid organization, all right -- but it's not the union of open source and open standards that make it one. Rather, it stands as an example of how to combine open source with all the worst aspects of the proprietary commercial software industry. How noble.

This is really amazing. No one ever claimed that the partners in this merger were anything other than industry trade organizations, but the fact that the new foundation will continue the work of its members is somehow un-noble. And nobility is the standard by which we should judge those who are trying to make Linux more competitive in the market.

His grammar and spelling may be better than that of the stereotypical Linux fanboys, who famously attack less-rabid supporters for their lack of purity. Or maybe he just has a better editor. But all the craft in the world doesn't disguise the fact that Neil's opinions are rarely more useful than the ramblings of an anonymous Usenet troll.

Thursday, January 25, 2007

Changing your development platform

http://discuss.joelonsoftware.com/default.asp?joel.3.443537

There are certain milestones in the life of a product when developers are free to ask if it’s time to change the platform it’s developed on. Typically you’ve shipped a major version and gone into maintenance mode. Planning has started for the next version, and you wonder if you should stick with what you’ve got or if, knowing what you know now, it might be better to switch from .NET to PHP, or from PHP to Java.

You might think that checking Netcraft would be a good idea. You can see if your current platform is gaining or losing market share, and who doesn’t like market share? If you look at the latest chart you’ll see that Microsoft is gaining on Apache.

But keep in mind that while Apache's market share has gone down marginally, the total number of sites has still gone up. Most of Microsoft's gain is from new sites, not from existing sites switching. (The exception being large site-parking operations switching to IIS.)

But really the important question is whether your preferred platform faces a reasonable possibility of becoming obsolete/unsupported. This is actually one place where the Unix world's slower upgrade cycles help. You rarely have applications "sunsetted" by the manufacturer.

Am I arguing in favor of dropping .NET? Not at all. I think you should use what works for you. What I'm saying is that unless your chosen platform is in danger of becoming unsupported, and that actually causes a problem for you, looking at market share charts should never get you to switch.

Now if you hadn't already chosen a platform, and you wanted to know what platform had a larger market, then you'd care about market share. But that's a subject for another post.

Tuesday, January 16, 2007

Geeks still don't know what normal people want

If you listen to geeks, locking out development of third-party applications will doom the iPhone in the market. But remember the now-famous review when the iPod was released:

No wireless. Less space than a nomad. Lame.
The market quickly decided it didn't care about wireless and bought the things in droves. And current versions have more space than the Nomad had when the iPod came out. Now that the iPhone has been shown, geeks are again claiming that it's going to fail, this time because it's not going to be open to third-party applications.

Apple doesn't care if you can extend it because they believe their target customer doesn't want it extended. They want something that works well, the same way, every time. The iPod wins because it does pretty much what people want, close enough to how they want, without making them think about how to do it.

The iPhone may not be open to developers, but it's upgradable. When Apple finishes writing software to make the Wi-Fi automatically pick up a hotspot and act as a VoIP phone, that functionality can be rolled out transparently. First-gen iPhones will become second-gen iPhones without the users having to do anything.

The upgrade path will be to higher HD capacity, so people can carry more movies with them. I see these things as hugely popular for people who take trains to work. If I could take a train where I work now, I'd already be on a waiting list for an iPhone.

Thursday, January 11, 2007

Principles: Deployment

  • Code that only runs on the developer's workstation isn't finished.
  • Any programmer who can't be bothered to ensure his code can be deployed is only doing half his job.
  • Being responsible for the integration issues caused by your code can be a real eye-opener.

Meet the new boss, same as the old boss v2

Sometimes you read something that you can't summarize without losing a lot. I just can't find any extra words in this post, so here it is in its entirety:

1) Whatever language is currently popular will be the target of dislike for novel and marginal languages.

2) Substitute technology or methodology for language in #1. In the case of methodology, it seems a straw man suffices.

3) Advocates will point to the success of toy projects to support claims for their language/methodology/technology (LMT).

4) Eventually either scale matters or nothing matters. Success brings scale. An LMT is worthy of consideration only after proving out at scale.

5) Feature velocity matters in early stage Web 2.0 startups with hyperbolic time to market, but that is only a popular topic on the Web for the same reason Hollywood loves to hand out Oscars.

6) Industry success brings baggage. Purity is the sign of an unpopular LMT. The volume of participants alone will otherwise muddy the water.

7) Popularity invites scrutiny. Being unfairly blamed for project failure signals a maturing LMT; unfairly claiming success, immature LMT. Advocates rarely spend much time differentiating success factors.

8) You can tell whether a LMT is mature by whether it is easier to find a practitioner or a consultant. Or by whether there is more software written *with* or prose written *about* the LMT.

9) If you stick around the industry long enough, the tech refresh cycle will repeat with different terminology and personalities. The neophytes trying to make their bones will accuse the old guard of being unable to adapt, when really we just don't want to stay on this treadmill. That's why making statements like "Java is the new COBOL" are ironic; given time, "N+1 is the new N" for all values of N. It's the same playbook, every time -- but as Harlan Ellison said of fiction, every story has already been told, but nobody was listening the first time.

10) Per #9, I could have written this same post, with little alteration, ten, twenty or thirty years ago. It seems to take ten years of practise to truly understand the value of any LMT. Early adopters do play the important role of exploring all the dead ends and limitations, at their cost. It's cheaper to watch other people fail, just like it hurts less to watch other people get injured.

11) Lisp is older than I am. There's a big difference between novel and marginal, although the marginal LMTs try to appear novel by inserting themselves into every tech refresh cycle. Disco will rise again!

12) If an LMT is truly essential, learning it is eventually involuntary. Early adopters assume high risks; on the plus side they generate a lot of fodder for blogs, books, courses and conferences.

13) I wonder if I can get rich writing a book called Agile Lisp for Web 2.0 SOA. At least the consulting and course revenue would be sweet. Maybe I can buy an island. Or at least afford the mortgage payments on a small semi-detached bungalow in the Bay area.

14) It requires support from a major industry player to bootstrap any novel LMT into popularity. The marginal LMTs often are good or even great, but lack sponsors.

15) C/C++ remain fundamental for historical reasons. C is a good compromise between portability and performance -- in fact, a C compiler creates more optimal code than humans on modern machine architectures. Even if not using C/C++ for implementation, most advocates of new languages must at least acknowledge how much heavy lifting C/C++ does for them.

16) Ditto with Agile and every preceding iterative methodology. Winding the clock back to waterfall is cheating. I'm more sophisticated than a neanderthal, but that won't work as a pick up line.

17) Per #13, I don't think so, because writing this post was already a chore, let alone expanding the material to book length. Me an Yegge both need a good editor.


This covers the technology pretty well. All he left out was the reason so much is coming back.

Get your boxes in order

Everyone seems to have an opinion on downloading music and TV shows, everything from "Information wants to be free" to "Skipping commercials with your TIVO is theft." Some of the views are self-serving, some are rationalizations, and some people have strong opinions based on what they believe is right and just.

Here's the thing a lot of people are missing, though: Breaking the law does not count as civil disobedience unless you go out of your way to do it publicly. Obviously I'm referring to people who upload and download music, movies or software without permission from the copyright holders. Some of them are just in it for the free tunes. Some of them think the law is wrong. But the ones who believe copyright laws have gone too far damage their case when they quietly violate the law, expecting to protest the law if they are caught.

Think the law has tilted too far in favor of the copyright industry? Great, so do I. Have you written to your congressman? If not, then don't complain about the law when you get busted. It makes it look like you're just trying to stay out of jail -- which you are -- and supports the MPAA and RIAA next time they try to get copyright extended.

Before you end up in a jury box, you should really try the ballot box. Time for me to get off my soapbox.

What is Steve Jobs thinking?

http://apple.com/iphone

We all knew Cisco had the trademark on the name. According to their press release:

"Cisco entered into negotiations with Apple in good faith after Apple repeatedly asked permission to use Cisco's iPhone name," said Mark Chandler, senior vice president and general counsel, Cisco. "There is no doubt that Apple's new phone is very exciting, but they should not be using our trademark without our permission."
They negotiated, Cisco said no. So Apple released it anyway. And Apple's response is:

Apple responded by saying the lawsuit was "silly" and that Cisco's trademark registration was "tenuous at best".

"We think Cisco's trademark lawsuit is silly," Apple spokesman Alan Hely said. "There are already several companies using the name iPhone for Voice Over Internet Protocol (VOIP) products."
It's "silly"? Come on, that sounds like they're daring Cisco to take it to court. And claiming that the trademark has already been diluted by other products is a dangerous game. If that argument prevails, then Apple will have no standing to prevent anyone else from releasing their own iPhone.

What the hell are they thinking?


[Update]

See the Joel on Software forums for some discussion of this.

Wednesday, January 10, 2007

Pay the man

http://discuss.joelonsoftware.com/default.asp?joel.3.434525

IT people are frequently highly-educated, with extensive formal and on-the-job training. And we all, if you look at our resumés, think that we're fast learners. That's probably because everything we work with keeps changing every couple of years, so anyone who's been doing this for very long has learned multiple generations of tools. Many of our jobs also require us to be generalists, with a broad range of knowledge across multiple unrelated fields.

It's probably not surprising, then, that we tend to be DIYers. Never changed a light fixture? No problem. Give me a few minutes with a book and I'll know enough to do it. House needs painting? Heck, I've always wanted an excuse to go get one of those power sprayers, I'm on it! That's why we're shocked to hear how much people pay to have someone do work that, after all, we could do ourselves with little or no training.

That was my frame of mind when I had to replace the shower door. The frame was mounted on tiled walls. I only cracked two of the tiles a little bit trying to get the old frame off, and lifted about a dozen away from the wall. No problem, just ran to the hardware store for some tile adhesive. And I only put the adhesive on a little too thick, so two of the tiles fell off the next day when I started mounting the frame. And I only cracked one more because I was unfamiliar with the mounting hardware.

I had to remove all the tiles and start over because the adhesive was actually nowhere near dry. I wanted to make sure it dried all the way, because I wasn't completely sure I did it right this time. When I tried again three days later, there was only one tile that fell off, because I had gone too thin with the adhesive. But after waiting a day for the grout on the rest to dry, I was able to scrape that space out, get the last tile up, and grout it. The caulk and grout I used to patch the cracks look mostly okay ... for now ... while they're still white.

All in all, it only took me a week and a half to hang that door. And the cracked and patched tiles will probably still look good when I go to sell the house. (At least I hope they will; the color was discontinued years ago, so I'd have to re-tile the whole damn bathroom otherwise.) I'm so glad I didn't pay a hundred bucks to some barely-trained tradesman to do it for me.

Lipstick on a pig

If you've ever seen one of my project plans, there's a chance you've seen a task at the end that says "Add pretty". With good use of stylesheets, you can radically improve -- or damage -- the look of a website even after all the coding and most of the testing are done. A different person or group with a different skill set can take over from the programmers and work some magic with little interaction.

You might think, based on this, that other parts of development can be pushed to the end after "real" development is done. You'll know someone was thinking that when you see a task late in a project plan that says "Add fast".

I suppose I can live with the idea that there will be some performance tuning that's best done once everything else is complete. And on some projects just throwing more hardware at the problem is cheaper than a programmer's time to fix it. But actually improving the performance of an application is hard, and the changes pervasive.

The second manifestation of specialized groups, one that always raises the brown flag, is when I see "Add security" at the end of a plan. It's simply inexperience that allows anyone to think they can graft a security model onto a codebase after the fact without significant amounts of rewriting.

"But this is a quick hack, and we only need the numbers for this one meeting." Sure, a report you'll only ever need once. I guess such a thing could exist, but I've never seen it. In the first place, nothing lasts as long as a temporary fix that works well enough. And in the second place, many (most?) large, successful products started out as small, successful products.

End/begin dependencies look really great on a Gantt chart. Activities that invite and incorporate feedback don't look so neat and clean. Treating security as something that can happen to a product after it's already done is no better than ... well, see the title of this post.

Design = function + aesthetics

Ask your local programmer if he knows how to design user interfaces and invariably he'll say he does. Go ahead, ask. I'll wait.

...

You're back? Good. Now go look at the new iPhone. Has your guy ever made anything remotely that cool? Unless you're reading this from Cupertino, odds are he hasn't. The UI is more beautiful and, as near as I can tell from the demo movies, more usable than any other phone or music player I've seen. But I wonder, how much of the perceived usability is a response to the beauty?

It's becoming conventional wisdom that you don't want to make the demo look done. Excessive visual polish early in the process not only limits the feedback you get to comments about the superficial details, it also suggests equally finished interaction with the system. It literally makes it look like it's doing more than it really is doing.

I've avoided this problem in my career by not being very good at graphics, and avoided realizing that by not working with any real visual artists to compare my work to. Yes, I used to think I was good at it, just like every programmer. Eventually I realized that consistency and predictability were a poor subset of what an artist can add.

Now, whenever I make up a project plan, there is a task at the end for "Add Pretty". And my name isn't on that task.

Tuesday, January 9, 2007

The Digital Dark Ages

I've been paying my mortgage for about three years now. Unless I change something, I'm going to keep paying on it for another 27 years. I try not to think about the fact that although I have an actual physical copy of the mortgage agreement, with real pen-and-ink signatures, I don't have any proof that I've ever made a payment.

At the risk of sounding like a Luddite, it bothers me that I have to trust the bank's computer system to keep track of all 360 payments I'll have made by the time it's over. I'm not just being paranoid. I had an issue where a bank said my wife still owed money on a loan we had paid off three years earlier. We didn't have anything in writing for each payment. The bank couldn't even tell us the history of the loan; just that the computer showed we still owed money. And if a bank says you owe money, unless your lawyers are bigger than their lawyers, then you owe them money.

If you go to museums, you'll see ledgers from banks in the 1800s and earlier. Over two hundred years later and we still know who paid their bills and when. But five years in the past ... it doesn't exist.

This could change with new regulations and retention requirements. But the big difference is what is standard vs. what you have to work at. A hundred years ago everything was written down. If you wanted to get rid of records you had to make an effort to identify what you wanted to delete, somehow separate it from the rest, and physically destroy it. Today, we only keep data as long as we have to. We only bother with long-term storage when the law or financial necessity makes us.

Let's assume we have some data that we really want to keep "forever". What is that going to take?

First, you'll want to store it on something that doesn't degrade quickly. Burning it to a CD or DVD seems to offer better longevity than VHS. Well, maybe. Second, you want to store it in a format that you'll be able to read when you want to. This might be a harder problem than the physical longevity, when you start to consider how much data goes into a modern file format.

Look at the problem from the user's perspective: the document format (the same applies to music and video) is just a way of saving the document so that it can be opened and look the same way at a later time, maybe on the same computer, maybe not. When Word 97 handles table formatting and text reflow around images in a certain way, for instance, the document format has a way of capturing the choices the user made.

If I open that Word 97 document in Word 2003, either the tables, text, and images look the same or they don't. If they look the same, it's because there's an import filter that understands what the old format means, and Word 2003 has a way of representing the same layout. If I then save it as Word 2003, the specific way the layout is represented has changed, but the user neither sees nor cares.

If, on the other hand, that Word 97 document doesn't look the same in Word 2003, it really doesn't matter to the user whether the problem is a bad import filter or Word 2003 not supporting the same features as Word 97. (Maybe they used flame text.) So a format that technically captures all the information needed to exactly recreate a document is utterly useless without something that can render it the same way.
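
As a toy model (mine, not Word's actual internals; the format names and the flame-text fallback are stand-ins), an import filter boils down to this: understand what the old format means and re-express it in whatever the new renderer supports. Fidelity is lost exactly where the mapping has no equivalent.

```python
# A toy import filter. The dicts here are stand-ins for real binary
# file formats; the point is the mapping, not the parsing.

OLD_DOC = {"format": "word97",
           "paragraphs": [("Annual Report", "flame-text"),
                          ("Figures attached.", "plain")]}

def import_word97(doc):
    """Translate Word 97 constructs into the new internal representation."""
    converted = []
    for text, effect in doc["paragraphs"]:
        if effect == "flame-text":
            # The new version dropped this feature, so the filter has to
            # pick a substitute -- and the document stops looking the same.
            effect = "bold-red"
        converted.append((text, effect))
    return {"format": "word2003", "paragraphs": converted}

print(import_word97(OLD_DOC))
```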

Okay, so we need long-term media, and we need to choose a format that is popular enough that there will still be import filters for it in the foreseeable future. Eventually we'll still reach the end of those paths. Either the disks will degrade, or the file format will be so out of date that no one makes import filters any more. When that happens, the only way to keep our data will be to copy it to new media, and potentially in a new format.

What should that format look like? We've already got PDF, which is based on how something looks in print. We've got various audio and video formats, which deal with playing an uninterrupted stream. But what about interactive/animated documents designed for online viewing?

Believe it or not, I'm going to suggest a Microsoft solution, though it's one they haven't thought to apply this way: PowerPoint. Today nearly everyone has a viewer, but not so long ago most of the slideshows I got were executables. If you had PowerPoint installed you could open the executable and edit the slideshow the same way you can edit a PDF if you have Acrobat.

As much as people complain about the bloat that Word adds to simple files, I think the future of file distribution will be to package the viewer along with the file. At some point storage becomes cheaper than the hassle of constantly updating all those obsolete file formats. The only question is how low a level the viewers will be written to: OS family, processor architecture, anything that runs C, etc.
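
For what it's worth, here's a minimal sketch of the idea (my own toy, not anything PowerPoint actually does; the file names and JSON schema are invented): the archive carries both the data and a tiny program that knows how to display it, so the only long-term dependency left is something that can still run the viewer.

```python
# A toy "document + viewer" bundle. One zip holds the data and the
# program that renders it.
import zipfile

VIEWER = '''\
import json
doc = json.load(open("document.json"))
for block in doc["blocks"]:
    print(block["text"])
'''

with zipfile.ZipFile("report.bundle.zip", "w") as z:
    z.writestr("document.json", '{"blocks": [{"text": "Q4 numbers"}]}')
    z.writestr("viewer.py", VIEWER)  # readable as long as Python runs somewhere
```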

Monday, January 8, 2007

Meet the new boss, same as the old boss

In case you haven't noticed yet, we're going through another round of power struggles in the IT industry. Oh, it might not look like that's what's going on. On the surface, people say it's a matter of web-based vs. desktop applications, and the conversations proceed on the premise that it's a discussion of technical merits.

Nope. It's the return of the glass house. Peel back all the rationalizations about easier deployment, easier support, more consistency, and what it really comes down to is more control. If we can just keep the software out of the users' hands then everything will be okay.

But what history shows us is that users like having control of their "stuff". Taking that control away requires either redefining "their stuff" to be "our stuff", or convincing them that they aren't qualified to handle their stuff.

Is this what your customers are hearing from you?

Sunday, January 7, 2007

The War on Laundry™

Let's see:

  • Not a finite thing that can be destroyed, nor a group that can be defeated.
  • No one qualified to declare surrender on its behalf.
  • There are better and worse ways to deal with it, none of which can completely eliminate it.
  • No matter how much you fight it, there will always be more soon.
  • No one really likes it, but the only way to avoid it is to change your lifestyle so profoundly that the alternative is worse.
Hmm, sounds about right.


Any resemblance to other Wars on Nouns is completely intentional.

Friday, January 5, 2007

The day I got a lot smarter

One sign of intelligence is the ability to learn from your mistakes. An even better sign is the ability to learn from someone else's mistakes. Unfortunately, we don't always have the luxury of watching someone else learn a valuable lesson, and we have to do it ourselves. But if we pay attention, sometimes we get to learn multiple lessons from one mistake. (Lucky us.)

Case in point: Dealing with a crisis. I was managing a group of web developers, and the project lead on an integration with our largest client was going on vacation. He assured me his backup was fully trained, and would be able to deal with any issues. He left on Friday, and we deployed some new code on Monday. Everything looked good.

On Wednesday at about 4 p.m., we got a call asking about an order. We couldn't find it in our system. From what we could tell, the branch that placed the order wasn't set up to use our system yet, so we shouldn't have the order. At 5 I let the backup go home for the day while I worked on writing up what we'd found. I sent an internal email explaining what I believed had happened. I said that I would call the client and explain why we didn't have the order, and that they should check their old system.

While double-checking the deployment plan, I discovered that the new branch actually was on our new system ... as of that Monday. That's part of what was included in the new code. That's when I got the shiver down my spine. By that time the backup, whose house was conveniently in a patch of bad cell coverage, was gone. The lead was on vacation. "Okay," I thought, "I've seen most of this code, in fact I've written a good bit of it. I can figure this out."

Stop laughing. It sounded good at the time.

To make a long story short (Too late!) we hadn't been accepting orders for three days from several branches, but had been returning confirmations for them. It was somewhere around 3 a.m. when I finally thought I knew exactly how many orders we had dropped, though I hadn't found the actual bug in the code yet. I created a spreadsheet with the list of affected orders. At one point I used Excel's drag-to-copy feature to fill a range of cells with the branch number for a set of orders.

Did you know Excel will automatically increment a number when you drag to copy? Yes, I know it too. At 11:30 in the morning, writing this, I know it. At 3 a.m. that night, I apparently didn't. So I sent the client a spreadsheet full of non-existent branch numbers that I hadn't double-checked. "Oops" apparently doesn't quite cover it.
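
Here's the mistake in miniature, as a Python reconstruction (a hypothetical branch number, not the actual spreadsheet):

```python
branch = 1042  # a hypothetical branch number

meant = [branch] * 5                   # copy: the same branch, five times
got = [branch + i for i in range(5)]   # drag-fill: an incrementing series

print(meant)  # [1042, 1042, 1042, 1042, 1042]
print(got)    # [1042, 1043, 1044, 1045, 1046] <- branches that don't exist
```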

The next morning on a conference call with the client, my boss, his boss, and several other people, we were going over the spreadsheet when someone noticed the problem. To me, it seemed obvious that it was a simple cut-and-paste error on the spreadsheet. But someone -- a co-worker, believe it or not -- decided to ask, "Are you sure? Because I don't see those other two branches on here either." After dumbly admitting that I didn't know anything about any other two branches, I ended the call so I could go figure out what was happening.

Now I had apparently demonstrated that I didn't actually know what was wrong, that I had no idea of its scope, and that I was trying to cover it up. Yay me. We called in the lead (whose vacation was at home doing renovations) and started going through the code. I finally found the bug, and it produced exactly the list of affected orders I had sent out early that morning, minus the cut-and-paste error. The "other two branches" turned out to be from the previous night's email, where I had specifically said those branches were not affected by the problem.

Within two hours, we had the code fixed and all the orders recovered. So everyone's happy, right? If you think so, then you haven't yet learned the lessons I did that day.

  1. No matter how urgently someone says they need an answer, the wrong answer won't help.

  2. If it looks like the wrong answer, it might as well be the wrong answer. This doesn't mean counter-intuitive answers can't be right. It means that presentation and the ability to support your conclusion count.

  3. If you didn't create the problem, always give the person who did the first chance to fix it.

  4. If someone knows more about a topic than you do, have them check your work.

  5. Don't make important decisions on too little sleep.

  6. Before making a presentation to a client, review the materials with your co-workers.

  7. Don't make important changes when key people are unavailable.

Looking at that list, I realize I already knew several of those lessons. So why did it take that incident to "learn" them? Because there's a difference between knowing something, and believing it.

Wednesday, January 3, 2007

When design is not design

"How is software production like the car industry?"

Oh no, not again. Yeah, well, most people are getting it wrong. So here's another shot at it.

There are aspects of car design that strictly deal with measurable quality: performance of the electrical system, horsepower, fuel economy, reliability. But the shape and style of the car are much more loosely coupled to hard-and-fast measurements. That facet of the design -- the way it looks, the demographic it will appeal to -- is not amenable to Six Sigma processes.

Granted, there are some cars that are strictly (or nearly so) utilitarian. Some people only care about efficiency and reliability. They buy Corollas by the boatload. But the FJ Cruiser is not the result of a logical, statistical analysis, with high conformance to the mean and low variation of anything.

I think what I'm trying to say is that marketing design is building the right thing, while production design is building the thing right. The auto industry is mature enough that you need both. Success in the software industry still relies more on building the right thing.

There are no IT projects ... mostly

http://www.issurvivor.com/ArticlesDetail.asp?ID=556

Whenever someone says something I've been thinking or saying for a while, it's clear evidence of how smart they are. (Don't laugh, you think so too.) So when Bob Lewis published the KJR Manifesto - Core Principles, he confirmed his intelligence when he wrote:

There are no IT projects. Projects are about changing and improving the business or what's the point?
The variation that I've been telling people for years is that people don't want software, they want the things they do with the software. So if you're working on an IT project and can't explain the benefits in terms that matter to the business, you probably shouldn't be doing the project. Then in the middle of making this point to someone, I realized it's not always true.

The one case I thought of was a steel manufacturer that I interviewed with. While the factory was computer-controlled, the people who worked on those systems were in Engineering. The non-production computer system -- email, financials, advertising, etc. -- was IT. In that case, IT really was a support function, no more important to the company than telecom.

That doesn't mean it was unimportant. They could no more survive without their back-office system than they could do without phones. But that system really had no bearing on how they ran their business. It was something that was expected to Just Work™, like the electricity or plumbing.

The thing I don't know is if this is the exception that proves the rule, or if it's more common than I thought to find a place where IT really isn't a strategic partner in the business.

Tuesday, January 2, 2007

Maybe I'm the one missing something

http://www.infoworld.com/article/06/12/11/50OPopenent_1.html

Magicians make a living at misdirection, getting you to look at their right hand while they hide the ball with their left. You'd think journalists would want to be a little more direct than that. But Neil McAllister pulled a whopper of a sleight-of-hand recently, using more than half his column to summarize a Joel Spolsky post before jumping to a completely unrelated conclusion.

Joel's point, and the first more-than-half of Neil's summary, was shooting down the idea beloved of suits that programming can be reduced to a set of building blocks that can be snapped together by a non-programmer. (For a hysterically painful example of how wrong this is, and how far people will go to try to do it anyway, see The Customer-Friendly System at The Daily WTF.)

Joel covered the ground pretty well, so I was wondering where Neil was going with this. Once I got to it, I had to re-read the segue three times to see what connection I was missing:

Don't you believe it. If, as Brooks wrote, the hard part of software development is the initial design, then no amount of radical workflows or agile development methods will get a struggling project out the door, any more than the latest GUI rapid-development toolkit will.

And neither will open source. Too often, commercial software companies decide to turn over their orphaned software to "the community" -- if such a thing exists -- in the naïve belief that open source will be a miracle cure to get a flagging project back on track. This is just another fallacy, as history demonstrates.
If there's a fundamental connection between open source and "Lego programming" I don't know about it. Maybe Neil makes the connection for us:

As Jamie Zawinski recounts, the resulting decision to rewrite [Netscape's] rendering engine from scratch derailed the project anywhere from six to ten months.
Which, as far as I can see, has nothing to do with the fact that it was open source. In fact it seems more like what Lotus did when they delayed 1-2-3 for 16 months while they rewrote it to fit in 640k, by which time Microsoft had taken the market with Excel. Actually that's another point that Joel made, sooner and better.

Is Neil trying to say that Lego programming assumes that code can be interchangeable, and man-month scheduling assumes that programmers are interchangeable? Maybe, and that's even an interesting idea. But that's not what he said, and if I flesh out the idea it won't be in the context of a critique of someone else's work.

Or maybe it was an opportunity to take a shot at the idea of "the community". Although in his very next column he talks about the year ahead for the open source community, negative community reaction to the Novell/Microsoft deal, and praise from the community for Sun open-sourcing Java. Does he really dispute the existence of a community, or was it hit bait?

Okay, so where did I start? Right, with misdirection. So the formula seems to be: quote a better columnist making a point that I like, completely change the subject with the word "therefore", summarize another author making my second point, and send it to InfoWorld. Am I ready to be a "real" pundit yet?