WWW versions

There is so much written, discussed, and argued about web versions all over the blogosphere – web 1.0, 2.0, 3.0, and so forth. There are at least two schools of thought regarding the nomenclature. One (the purists) defines each version precisely, presents strict guidelines for a website to belong to a certain version, and lays down well-defined frontiers within which each version resides. The other takes a less doctrinaire view, dismisses these versions as buzzwords, and questions whether they can be used in any meaningful way. You can argue ad infinitum about the rationale of this nomenclature, but you cannot ignore it completely. If anything, the versions help us nail down the attributes that make a website attractive in all respects – aesthetically, functionally, and above all economically.

Well, to be precise, these are versions of the world wide web (henceforth referred to as the www), as there is a difference between the internet and the www. Here is a summary of the past, present, and future state of the www using the versioning system, mainly because of its intuitive and chronological appeal.

Web 2.0 conference

Here is a little history to get us started. We cannot talk about www versions without mentioning the Web 2.0 Conference held in 2004, where leaders of innovative startups and internet industry titans came together to discuss the theme "the web as a platform". It was the first second-generation internet business conference. The term web 2.0 gained wider popularity and acceptance as a result of this conference, and web 1.0 is a retronym referring to the state of the www before the dot-com meltdown of 2000–01.

Web 1.0

Let's start at the beginning – the seminal ARPANET of 1969 (the famous year when people were singing "Mrs. Robinson" and cheering Apollo 11 on its way to the moon) evolved into the internet as we know it. The www not only gained wide acceptance but also became extremely popular, and people discovered new ways of leveraging its appeal. The first movement – Web 1.0 – was to move knowledge, information, data, etc. online. In other words, a lot of content was brought online. There was commerce too, but it was like taking a static catalog and publishing it online: people could then use a shopping cart, as they would in real life, to pile up items and pay for them online. There were search engines like Yahoo (yes, it was one of the earliest), AltaVista, OpenText, Infoseek, Lycos, Excite, HotBot, and Ask Jeeves, just to name a few! There were also some budding communities like GeoCities, ICQ, and Slashdot, among others. GeoCities, as it originally existed, was a webhosting service that let users create content but also had community features like bulletin boards and chatrooms. ICQ was an internet-wide instant messaging service. Slashdot was the precursor to present-day blogging.

I came across a harmless and hilarious rendition of a web 1.0 website, and I am sure it is a joke. If you want a taste of it, you can get it here! That was the past; now let's move on to the present – Web 2.0. Web 2.0 seems to be all about communication – networking (if I may say so) among humans and computers.

Web 2.0

Let's investigate the human angle of Web 2.0 first. Social networking sites are immensely popular because they make it so easy to interact, connect, and communicate. Weblogs (blogs) have revolutionized conventional journalism. Before the advent of blogs, an army of journalists covered events, voiced opinions, and chronicled history as it happened. Now you don't have to be a journalist to get the right to write history!! Ok, I got a little carried away with my affinity for assonance here, but what I mean is that a layperson can now publish online about current events, or about events that are about to happen (and get sued, if you are the blogger spilling the beans about Apple's latest product launch). I think blogging is the most potent online revolution of recent years. Wikis are another great product of human interaction, leveraging that interaction to produce the greatest encyclopedia ever, and so quickly and efficiently at that. Above all, it's free to the public!

Now for the technology angle of Web 2.0. Web Application Programming Interfaces (APIs) and web services are the means by which computer-to-computer (C-to-C) interaction becomes possible. APIs are programming interfaces that let a web application interact with a site or service – like the Digg API, which lets web developers retrieve Digg data for use on their own sites. RSS is the other tool for C-to-C interaction. It is a multicronym (if I am allowed to coin a new term here) with several long forms: Really Simple Syndication, Rich Site Summary, or RDF Site Summary. The pictures you see in the sidebar of this page are just an RSS feed from my Flickr site. If you are a content provider (a blogger, for instance), then syndicating your content to RSS aggregators (like My Yahoo, which is also a great example of personalization) is not only a great way of reaching a wider audience but also an efficient means of search engine optimization (SEO). Suppose you are Joe the plumber and own a plumbing business. If you have a dynamic website that offers visitors bookmarking tools for StumbleUpon, broadcasts RSS feeds (maybe articles about the benefits of repiping, or bathroom fittings product reviews), offers comment threads where readers can ask questions or post comments, and categorizes and archives content by tags, then you can be sure that a Yellow Pages site won't rank above yours in the results served up by an internet search engine.
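To make the C-to-C idea concrete, here is a minimal sketch of what an RSS aggregator does under the hood: take a feed document and pull out each item's title and link. The feed content and URLs below are made up for illustration; a real aggregator would fetch the XML over HTTP before parsing it.

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document, the kind a blog syndicates to aggregators.
# Titles and URLs are invented for this example.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Joe's Plumbing Blog</title>
    <item>
      <title>Benefits of repiping</title>
      <link>http://example.com/repiping</link>
    </item>
    <item>
      <title>Bathroom fittings: a review</title>
      <link>http://example.com/fittings</link>
    </item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in read_feed(FEED):
    print(title, "->", link)
```

No human is involved anywhere in that exchange – one program publishes structured XML, another consumes it – which is exactly what makes syndication such an efficient distribution channel.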

Here is a great directory of cool Web 2.0 sites.

Web 3.0

Moving on to Web 3.0, where things get interesting: Web 3.0 seems to be in its budding stages now, so we can safely say it is the version of the future. I have to admit that it was Sramana Mitra's blog that first got me intrigued by the www versions. Her classification of Web 3.0 has a lot of intuitive and conceptual appeal and makes for a very compelling read. She says that the recipe for Web 3.0 consists of content, commerce, and community, and introduces a fourth "C" – context. It is context that would drive Web 3.0, along with vertical search (a great example is the travel site Kayak) and personalization. Personally, I feel the semantic web – in which computers become more human-like and are able to analyze data and instructions (not only the low-level kind readable by computers but also the high-level semantics understood by humans) – is the version of the future. This may not be so hard to achieve, what with artificially intelligent agents already in existence.

While the concept of making computers more human-like does evoke disturbing images of machines taking over the world, as depicted in innumerable sci-fi flicks like Terminator, it is not hard to imagine that it will take decades, if not centuries, for this to happen. So instead of a single Web 3.0 version, there may be several versions in the interim. Before I go into a new way of keeping track of what these versions would entail, let me say a little more about personalization.

Amazon shopping experience

Amazon is a great example of a website that incorporates commerce, community, and personalization, among other things. We will focus on personalization. Most of you will be familiar with the Amazon shopping experience. Let's say you are looking for books on enterprise risk management. You sign in and search for risk management books. You find one that looks interesting and click on its link. You are presented with the book's price and details (it even lets you look inside the book at the table of contents). If you are unsure of what you want, it offers invaluable options, like other books on the same topic, or what other people are buying or end up buying eventually (community). If you are like me, you want to read reviews before you buy anything, and Amazon offers just that – invaluable customer reviews of the product, and discussion threads that may influence your decision one way or the other (community). The best thing is that the reviews themselves are rated as helpful by other readers, and Amazon displays these ratings, which again is invaluable because you don't want to be swayed by a vested interest; you want a good, unbiased review. If you just go to the homepage, Amazon lists the "New for you" and "Latest from authors you may like" sections, which feature newly published books that you may like (personalization). This is remarkable because you don't need to explicitly indicate your preferences. Like a human, it figures them out based on your past browsing and buying history. I also think that this personalization – rather, the degree of personalization – may well become the preferred benchmark against which future versions are evaluated.
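The "what other people end up buying" feature described above can be sketched with simple co-occurrence counting over order histories. To be clear, this is not Amazon's actual algorithm – their production systems are far more sophisticated – just a toy illustration with made-up order data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical order histories: each set is one customer's basket.
orders = [
    {"risk-mgmt-101", "enterprise-risk", "stats-primer"},
    {"risk-mgmt-101", "enterprise-risk"},
    {"risk-mgmt-101", "cooking-basics"},
]

# Count how often each pair of books appears in the same basket.
pair_counts = Counter()
for basket in orders:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def also_bought(book, top=3):
    """Books most often bought together with `book`, best match first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == book:
            scores[b] += n
        elif b == book:
            scores[a] += n
    return [title for title, _ in scores.most_common(top)]

print(also_bought("risk-mgmt-101"))
```

With this data, "enterprise-risk" tops the list for "risk-mgmt-101" because two of the three customers bought them together – the site never asked anyone their preferences; it inferred them from behavior.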

Degree of personalization and future www versions

Let me expound upon what I mean here. When I searched for the term www versions on Google, its search engine returned about 116 million results – try it. Now what I don't understand is: why is Google so proud of the fact that it found 116 million results in 0.17 seconds?! All I asked for was one page with the information I need. Ok – granted, the search term was a little generic, but how many of us actually look at the next page (beyond the first 10 results) of any search we initiate, unless we are desperate for the information and the first ten results just don't serve up what we were looking for? Will any one of us ever go through all 116 million results?!! It is inconceivable even to think of it!! I would hope that this changes as the degree of personalization increases. As your artificially intelligent personal Google search agent gets to know you better – learning your preferences, habits, dispositions, needs, etc. – it may be able to cut down on the irrelevant results of your search. So with every www version it gets better, serving you fewer but more relevant pages, and you don't have to waste time sifting through the irrelevant information overload. Eventually your search agent knows you completely, maybe even better than you know yourself, at which point it need present only one page with exactly what you need – a page that may not even exist yet, but that the agent has customized to give you exactly what you need. At that point you will have lost all your privacy – but isn't that a small price to pay for the perfect search engine?! Here is a graph that charts the course of the future versions.
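The idea of an agent whittling millions of results down to a handful can be sketched as a toy reranker: score each result by its overlap with an interest profile learned from your history, and keep only the best matches. The result titles, topic tags, and profile below are all invented for illustration:

```python
# Hypothetical search results, each tagged with topics, plus a user
# profile supposedly learned from past browsing (both made up here).
results = [
    ("History of ARPANET", {"history", "internet"}),
    ("Web 3.0 and the semantic web", {"semantic", "web", "versions"}),
    ("Plumbing supplies catalog", {"plumbing"}),
    ("WWW versions explained", {"web", "versions", "internet"}),
]
profile = {"web", "versions", "semantic"}

def personalize(results, profile, keep=2):
    """Rank pages by overlap with the user's interests; drop the rest."""
    scored = [(len(tags & profile), title) for title, tags in results]
    scored.sort(reverse=True)
    return [title for score, title in scored[:keep] if score > 0]

print(personalize(results, profile))
```

Out of four candidates, only the two pages that actually match this user's interests survive – the plumbing catalog and the history page never reach the screen. As the profile grows richer with each www version, `keep` shrinks toward one.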

Google Search Results vs www versions

As the www versions progress, the number of search results produced by Google, or any search engine for that matter, will tend toward one.
