
Web 2.0 Infrastructure: No longer a “dirty little secret”

One primary cause of the dotcom crash was a tsunami of content and offerings that — at the time — were being consumed by thirsty consumers sipping it through a straw (i.e., a modem), who couldn't possibly access all that good stuff. Forgetting the infrastructure lessons of the crash (or just not talking about them) is the dirty little secret of Web 2.0.

My last day at Web 2.0 in San Francisco, I wrote a post called, “Web 2.0 Conference: The Dirty Little Secret” about the surprising lack of discussion about the scaling that would be demanded of startups offering web hosted applications. This scaling is not just more servers and data center bandwidth…but scaling that includes dealing with latency over the internet.

Just read both Jeremy Wright's and Om Malik's posts about this exact issue.

Wright says, "Maybe I'm just spoiled, having worked in high performance, high availability apps before, but it constantly astounds me what some folk consider 'scaleable' and 'available' applications." Having just lived through the Typepad scalability hiccup (and I must admit being very impressed by how they handled it), I'd point to Typepad as just one example of a high profile "Web 2.0" company whose lack of infrastructure had a negative effect on its users.

Malik says, "However, the lack of planning for scale is a clear sign that we are living in a 'built to flip' age. No one is thinking (or planning) about long term business models!"

It's not just servers and bandwidth that are required for scale. It's dealing with latency over an increasingly fragmented and geographically dispersed base of people consuming web applications. As I mentioned in the post linked to above, "This (latency) is a technical problem. Imagine you have a portal that is 'consuming' web services from a bunch of different sites. You've undoubtedly experienced ONE web service in the past (DoubleClick ads) where the web page 'hung' (didn't parse) waiting for the DoubleClick service to deliver the ad. Now imagine that your blog or web page is grabbing photos, catalog items, maybe audio or video, blogrolls, calendars ALL from different web services, and you end up with one incredibly horrible user experience!"
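To make the problem concrete: if a page fetches its fragments one after another, the user waits for the *sum* of all the service latencies, and one slow provider hangs the whole page. A minimal sketch of the usual mitigation — fetch concurrently with a per-fragment timeout and degrade to a placeholder — using made-up service names and simulated delays (nothing here reflects how any of the companies mentioned actually work):

```python
import asyncio

# Hypothetical third-party fragments a page might assemble; the delays
# (in seconds) stand in for network latency to each provider.
SERVICES = {
    "photos": 0.05,
    "calendar": 0.02,
    "ads": 1.0,  # one slow provider, like the DoubleClick example
}

async def fetch_fragment(name: str, delay: float, timeout: float = 0.2) -> str:
    """Fetch one service's fragment, falling back if it exceeds the timeout."""
    try:
        # asyncio.sleep stands in for an actual network request.
        await asyncio.wait_for(asyncio.sleep(delay), timeout=timeout)
        return f"{name}: ok"
    except asyncio.TimeoutError:
        # Degrade gracefully instead of hanging the whole page.
        return f"{name}: placeholder"

async def assemble_page() -> list[str]:
    # All fragments are fetched concurrently, so the total wait is the
    # slowest fragment capped at the timeout, not the sum of all delays.
    return await asyncio.gather(
        *(fetch_fragment(name, delay) for name, delay in SERVICES.items())
    )

fragments = asyncio.run(assemble_page())
print(fragments)
```

With these simulated delays the page renders in roughly the 0.2-second timeout rather than waiting the full second for the slow "ads" service, which is exactly the hung-page experience the quote above describes.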

There’s a reason, for example, why Akamai exists and why they offer *both* tagged media files *and* transaction/session management: it’s all geared toward a balanced, relatively equal experience for users wherever they reside on the planet.

I use Typepad, Newsgator, Gmail, Audioblog, and a plethora of other hosted offerings. I must admit spending a lot of time waiting for the data to travel the client-to-server-to-client round trip.

Marc Canter and JD Lasica hosted a panel discussion about Open Infrastructure at Web 2.0. While the panel mostly focused on open formats, personal ownership of attention and other metadata (and ownership of data you can move from service to service at will), it spent little time on bandwidth and internet infrastructure (disclaimer: I've been involved peripherally advising JD on proposals surrounding open infrastructure and leveraging current for-profit company offerings).

I fear that the "server in the basement to start, and we'll add racks o' servers later" mentality will guarantee that the user experience with Web 2.0 applications will start off enjoyable and quickly turn into a negative experience…and potentially kill the acceleration of this next phase of the Web.