One thing that struck me talking to people after this month’s DemoCamp is the vast chasm between the testing effort (or at least the testing thought) that goes into an enterprise application and what goes into what I will label a “Web 2.0” application.

Someday I hope to not have to start a post with definitions, but today is clearly not the day. Enterprise applications are designed to be consumed by corporations; Web 2.0 applications could also be consumed by corporations, but are by and large aimed at mass-market appeal. Contrast, for instance, the testing effort of something like Select Access, an enterprise authentication and authorization product, with that of BubbleShare, a very cool (and local) photo sharing site.

A typical release of Select Access spends about 4 – 5 months in test. While I do not know for certain, I have a fairly strong hunch that BubbleShare does not spend that much time there. Enterprise software also has to care about a larger selection of things. For the most part, Web 2.0 apps do not have to be concerned with regulatory and compliance issues, nor with corporate intellectual property. Another area I have seen lacking in a lot of Web 2.0 style applications is internationalization and localization. I would be amazed if most of the products shown at DemoCamp had support for either of these, but those of us playing in the enterprise space take this sort of thing for granted. A product will just Do The Right Thing with regard to i18n and l10n.
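At its simplest, “doing the right thing” with i18n means never hard-coding user-facing strings, but looking them up by locale with a sensible fallback. A toy sketch of that idea (real products would use something like gettext or ICU; the catalogs and `translate` helper here are invented for illustration):

```python
# Toy sketch of externalized strings, the core mechanism behind i18n.
# The catalogs and helper below are made up for illustration only.
CATALOGS = {
    "en": {"greeting": "Welcome, {name}!"},
    "fr": {"greeting": "Bienvenue, {name} !"},
}

def translate(locale, key, **kwargs):
    # Fall back to English when the locale or the key is missing,
    # so an untranslated product still works rather than crashing.
    catalog = CATALOGS.get(locale, CATALOGS["en"])
    template = catalog.get(key, CATALOGS["en"][key])
    return template.format(**kwargs)

print(translate("fr", "greeting", name="Sylvie"))  # Bienvenue, Sylvie !
print(translate("de", "greeting", name="Hans"))    # falls back to English
```

The point is not the five lines of code but the discipline: enterprise test teams verify that every string goes through a lookup like this, which is exactly the kind of work a two-person Web 2.0 shop tends to skip.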

Joey deVilla had a great cartoon a while back that showed the difference between Web 1.0 and 2.0: in 1.0, everything was “Under Construction”, whereas in 2.0 everything is “Beta”*. Good luck running a 2 year beta in the enterprise space. It would never fly. This leads testing in the enterprise space to be proactive, whereas in the Web 2.0 world it tends to be more reactive. “Oops. There is a bug. I’ll fix it and push it out onto the server” seems to be a common maintenance practice. I’ve heard of some companies whose product versioning scheme is the date and time of the build, down to the minute. I think this is crazy, but apparently it works for them. To borrow a term from the Buddhists, the middle way is likely the best approach. Push code out to the servers often, but at minimum the pushes should be a week apart. This lets test have at the code for a few days and gets the developers inadvertently doing integration testing.
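A timestamp-based versioning scheme like the one those companies use could be as little as this (the exact format is my guess; the idea is just to stamp each deploy with the build minute):

```python
from datetime import datetime, timezone

def build_version(now=None):
    """Return a version string derived from the build time, down to the minute.

    The YYYY.MM.DD.HHMM layout is an assumption for illustration;
    any scheme with minute resolution would serve the same purpose.
    """
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y.%m.%d.%H%M")

# Two deploys a minute apart get distinct, naturally-ordered versions.
print(build_version(datetime(2006, 11, 7, 21, 30, tzinfo=timezone.utc)))
# 2006.11.07.2130
```

Such versions sort chronologically for free, which is handy when you push to the servers several times a day, but they tell customers nothing about compatibility, which is one reason the enterprise world sticks to semantic-style release numbers.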

I am by no means saying that the enterprise people have things right, or that the Web 2.0 people do or do not. I just thought it an interesting observation how different the approaches to testing are.

* Sometimes there is a valid business reason for slapping Beta on a site. I read once that Google News was in beta forever because as soon as they made it non-beta they would have to pay for a lot of their content, as it would then be a commercially distributed product. “Do no evil” clearly doesn’t cover doing trickiness.