Scaling Web 2.0: The Dirty Little Secret Exposed?
I was very pleased to see Tim O'Reilly bring the issue of Web 2.0 scaling forward, along with Ray Ozzie's perspective. This is a vitally important issue, and it needs analysis, facts, discussion, and big-time thought-leadership exposure.
I first wrote about the “dirty little secret” of Web 2.0 back in December of 2005. That secret is that infrastructure, bandwidth and minimizing latency are huge issues for startups, and ones little discussed. I know it firsthand from a conferencing startup I worked with last year, and informing developers is an imperative, since this dirty little secret will impact rich Internet applications, mashups, widgets, and the other composite applications delivered going forward.
This problem becomes more acute as we all pull data from geographically dispersed hosted online services. I can’t tell you how many times I’ve waited…and waited…and waited…for some data to appear in a widget, an ad served from DoubleClick, or a startpage pulling simple RSS text data from dozens of different sources. Now imagine several, or dozens of, interdependent sources, ones that pull data from other services to deliver a composite web service that is, in turn, consumed by yet another new application. It’s a recipe for disaster unless managed at a world-class level.
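To make the waiting concrete, here is a minimal sketch in Python (the feed URLs are placeholders, not real endpoints) of why a startpage that pulls its sources one after another is hostage to its slowest feed, and how fetching them concurrently with a hard timeout at least bounds the wait to the slowest single source rather than the sum of all of them.

```python
# Minimal sketch: why a startpage pulling many remote feeds stalls, and how
# concurrent fetches with a hard timeout bound the worst case.
# The feed URLs below are placeholders, not real endpoints.
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

FEEDS = [
    "https://example.com/feeds/news.rss",      # hypothetical
    "https://example.org/feeds/sports.rss",    # hypothetical
    "https://example.net/feeds/weather.rss",   # hypothetical
]

def fetch(url, timeout=2.0):
    """Fetch one feed; a slow or dead source fails fast instead of hanging."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return url, resp.read()
    except Exception as exc:  # timeout, DNS failure, HTTP error, etc.
        return url, exc

# Sequential fetching: total wait is the SUM of every source's latency.
# Concurrent fetching: total wait is roughly the SLOWEST single source,
# capped by the timeout, so one laggard can't stall the whole page.
with ThreadPoolExecutor(max_workers=len(FEEDS)) as pool:
    futures = [pool.submit(fetch, url) for url in FEEDS]
    for future in as_completed(futures, timeout=5.0):
        url, result = future.result()
        status = "failed" if isinstance(result, Exception) else f"{len(result)} bytes"
        print(f"{url}: {status}")
```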
Now that more of us are playing with video, Flash and, especially, streaming video (e.g., uStream, and what I did at a low level yesterday with Skype video), betting a business, a workshop series, a product category or a composite application on this stuff means we had all better get more informed about this issue, and damn fast.
I’ve said before that one key to the dotcom crash was HUGE amounts of content and functionality being shoved into the top of the funnel while those of us consuming it were drinking from the tiny end of the funnel through 56kbps straws.
I fear that unless this dirty little secret is handled, and handled by disseminating understanding amongst ALL creators, developers, business strategists and users of Web/Enterprise 2.0 products and services, users’ expectations are going to be dashed and material barriers to adoption and use will follow. Maybe not another crash, but the barriers and obstacles that will come are preventable with better understanding and knowledge dissemination.
I’ve talked with the folks at O’Reilly about how great it would be to have thought leaders in Internet infrastructure give the development community the facts about bandwidth, infrastructure and how to minimize or eliminate latency.
Case in point: a few months ago I had a long conversation with Dave Abbott, the CTO of Akamai’s customer #99, Internet Broadcasting Systems. These guys drive over 70 TV web sites of major broadcast affiliates, as well as the content feeds for NBC for the Olympics. Dave gave me a comprehensive overview of what they drive through Akamai and how they scale. It’s impressive: the amount of content moved around is stunning, and they couldn’t exist without Akamai handling and managing those enormous volumes.
I understand the difference between what content delivery network (CDN) providers (e.g., Akamai, Limelight, CacheFly) offer and application (transactional) data delivery, which is at the heart of the dirty little secret. They’re very different animals, and yet Akamai has taken steps to deliver applications at the edge through their globally distributed server network. What I DON’T know is what these CDN providers might be doing to provide scaling services to Web 2.0 and Enterprise 2.0 providers; what the peer-to-peer solutions might be; and how to advise my clients, friends and others in the Internet space.
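As a rough illustration of that CDN-versus-transactional split, here is a minimal Python/WSGI sketch (the paths and cache lifetimes are made up for the example): static assets get long-lived Cache-Control headers an edge server can honor and serve from cache, while per-user transactional responses are marked uncacheable and always travel back to the origin, so the edge can only shorten the route, not skip it.

```python
# Minimal sketch of the static-vs-transactional split a CDN cares about.
# Paths and cache lifetimes are illustrative only.
def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path.startswith("/static/"):
        # Images, scripts, video: safe for any edge node to cache and serve.
        headers = [("Content-Type", "application/octet-stream"),
                   ("Cache-Control", "public, max-age=86400")]   # 24 hours
        body = b"...static asset bytes..."
    else:
        # Transactional data (account info, a widget's live feed): every
        # request must reach the origin, so the edge can only accelerate
        # the route, not cache the answer.
        headers = [("Content-Type", "application/json"),
                   ("Cache-Control", "private, no-store")]
        body = b'{"balance": 1234}'
    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, app).serve_forever()
```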
I was also curious about what Amazon Web Services (AWS) might be doing, since they’ve hit the sweet spot on computing, storage and massive scaling (I talked to Jeff Bezos from Amazon at Etech, and he suggested I chat with his CTO for more info on this issue and how AWS is addressing it). Still, I’m quite ill-informed about how AWS can scale globally and how they’re going about minimizing or eliminating latency.
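For flavor, here is a minimal sketch of one way a small shop leans on AWS for scale: pushing static assets into Amazon S3 so your own origin server isn’t what buckles under load. It assumes the boto3 SDK and a hypothetical bucket name, and it’s an illustration of the idea rather than anything AWS told me.

```python
# Minimal sketch: offload static assets to Amazon S3 so the origin server
# isn't serving them itself. Assumes boto3 is installed and AWS credentials
# are configured; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

def publish_asset(local_path: str, key: str) -> str:
    """Upload one file to S3 and return the URL it can be served from."""
    bucket = "my-startup-assets"  # hypothetical bucket name
    s3.upload_file(
        local_path,
        bucket,
        key,
        ExtraArgs={"CacheControl": "public, max-age=86400"},  # let caches keep it a day
    )
    return f"https://{bucket}.s3.amazonaws.com/{key}"

if __name__ == "__main__":
    print(publish_asset("logo.png", "img/logo.png"))
```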
When we read about Google’s server farms, Sun’s IT containers and now Microsoft’s management of globally distributed data, we know that the thought leadership and understanding exist. We could be guided by those who understand it and have mitigated their own risks, without asking them to reveal too much or cough up their competitive advantage.
1 Comment
Good read and interesting topic. Akamai has a white paper on how they accelerate web 2.0 apps.
http://www.akamai.com/html/forms/web20.html