Long time readers will know that I have been reporting on the Semantic Web for many years - since June of 2005, in fact, when I dedicated an issue of my eJournal to The Future of the Web. The long interview I included with Tim Berners-Lee remains one of the most-read articles on this site of all time. Ever since then, I've periodically given an "attaboy" to the Semantic Web. And guess what? It's that time again.
Why? Because the more the Web is capable of doing, the more we can get out of it. And given how much we now rely on the Internet and the Web, we can't afford to allow either to be less than they are capable of being.
Five years ago I dedicated an issue of Standards Today (then called the Consortium Standards Bulletin) to the future of the Semantic Web. The centerpiece was a very detailed interview (over 5,700 words) with the inventor of both the Web and the Semantic Web, Tim Berners-Lee.
That issue had two foci: the importance of Berners-Lee’s vision of the Semantic Web becoming a reality, and the very substantial impediments to that happening. In my interview, I returned again and again to the latter issue.
What were those impediments? Back in June of 2005, simply understanding what the Semantic Web was all about was a real problem; proponents found it hard to articulate its operations and uses in a way that people could get their minds around. More serious, though, was the amount of effort that implementing the W3C's core Semantic Web standards would take, combined with the absence of clear examples of what kind of rewards would follow for those who took up this burden. In effect, there was not only a chicken-and-egg issue, but an absence of people interested in buying either the bird or the egg.
Back in December of last year, Google posted a brief announcement of a new experiment in online publishing. At first blush it seemed to represent a challenge to the Wikipedia - but with a few differences. Google summarized the concept as follows:
Earlier this week, we started inviting a selected group of people to try a new, free tool that we are calling "knol", which stands for a unit of knowledge. Our goal is to encourage people who know a particular subject to write an authoritative article about it. The tool is still in development and this is just the first phase of testing. For now, using it is by invitation only. But we wanted to share with everyone the basic premises and goals behind this project.
Then the project dropped out of sight while the chosen authors contributed initial content, and while Google decided whether to green-light the project for ongoing support and public participation.
This Wednesday, Google lifted the password curtain on its infant knol site, and issued a new announcement. In some respects, the description of the knol game plan (and even the words) is identical to what we read in the original blog entry. In others, it is different, apparently reflecting lessons learned and author feedback received during the intervening seven months. And, of course, there is now the nascent site itself to browse and watch evolve as well.
What you'll see when you visit turns out to be quite different from the Wikipedia - at least for now. Given that until yesterday the site was neither open to public feedback nor to volunteer authors wishing to launch their own knols, what it looks like today will almost certainly be very different from what it looks like a year from now, or perhaps even a month from now. With the door now open to anyone who wants to walk in, and the freedom to go in any direction - not to mention ad revenues to be reaped and shared with authors - what we will see will be akin to an Apple App Store for everyman and everywoman: not just for developers, but for anyone who can type.
Here's my take on what to expect, how the knol experiment may evolve, and why I think it matters - a lot.
Microsoft has made many acquisitions for many reasons over its history - 122 to date, according to the list maintained at the Wikipedia. Almost 100 of these have been consummated in the last decade, as the company that triumphed in operating system and office productivity software has sought (often unsuccessfully) to achieve similar success in other domains. Other purchases have demonstrated pragmatic "build versus buy" decisions, serving to add functionalities to products that needed them more quickly and efficiently than in-house efforts could achieve.
In its earlier days, Microsoft was much more likely to mimic the products of other companies than to buy them, in part reflecting its engineering-driven culture, and in part its hardball approach to competition. When it did add features this way, it invariably added them for free into its existing products to make them more desirable. The result was often to drive the originators of those features out of the marketplace, since who would buy what they could get for free? Sometimes, the motivation was more desperate, as with the crash development, and bundling, of Internet Explorer into Windows, when Netscape threatened to open a critical breach in Microsoft's control of personal computing.
If that sounds vaguely familiar, it should, since Google is following the same course, albeit in a kinder, gentler way, as it adds service upon service, all for free, and all in the service of racking up more and more ad revenues. That's disturbing, because when your goal is ad revenues and not great technology, you may not necessarily produce great technology. But as Google's dominance continues to grow, who will be able to credibly compete against it in those technologies, to ensure that innovation continues?
Or so, at least, Google would like you to conclude. Significant differences include single-author control (but the freedom for other authors to set up competing pages as well), bylines for page authors, reader ranking, and - oh yes - Google ads (authors interested in allowing ad placements would get a "substantial" share of the resulting revenues).
Here's how Google introduces the concept on a somewhat higher level:
The web contains an enormous amount of information, and Google has helped to make that information more easily accessible by providing pretty good search facilities. But not everything is written nor is everything well organized to make it easily discoverable. There are millions of people who possess useful knowledge that they would love to share, and there are billions of people who can benefit from it. We believe that many do not share that knowledge today simply because it is not easy enough to do that. The challenge posed to us by Larry, Sergey and Eric was to find a way to help people share their knowledge. This is our main goal.
The question, of course, is how well the Knol project will compete with the Wikipedia, both for author input and for readers.
And, of course, in quality. The Google announcement of the Knol project is pasted in below in full, but I'll also provide my review of and reactions to the concept, not only on the knol's prospects for fulfilling Google's goals, but also on its potential for providing a well-supported, highly visible testbed for individuals to experiment with a wide variety of models for the collaborative creation of Web-based content - something the Wikipedia does not offer.
The W3C announced the launch of an intriguing new "Incubator Activity" earlier this week that should test the limits to which XML, the lingua franca of all things IT, can be put. The new initiative is called the "Emotion Incubator Group," and its purpose is to take us beyond the narrow range of the emoticon. According to the group's Charter:
The mission of the Emotion Incubator Group, part of the Incubator Activity, is to investigate the prospects of defining a general-purpose Emotion annotation and representation language, which should be usable in a large variety of technological contexts where emotions need to be represented.
That statement also illustrates the range of ways in which those at the cutting edge of standards development are trying to enrich the potential for human-IT interaction, even as they seek to increase the effectiveness of computer-to-computer interaction through Semantic Web standards.
What would an "emotion annotation and representation language" be all about, and is the creation of such a language actually practical? Let's have a look.
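To make the idea a bit more concrete, here is a minimal sketch of what an emotion annotation might look like as XML, built with Python's standard library. Everything about the markup here - the `<emotion>` element, its `category` and `intensity` attributes, and the category names - is invented for illustration; the actual vocabulary was precisely what the Incubator Group had yet to define.

```python
# Hypothetical sketch of an emotion annotation as XML. The element and
# attribute names are invented for illustration only; the real language
# was still to be defined by the W3C Emotion Incubator Group.
import xml.etree.ElementTree as ET

def annotate_emotion(text, category, intensity):
    """Wrap a piece of text in an invented <emotion> annotation.

    category:  a label such as "joy" or "frustration" (hypothetical)
    intensity: a value from 0.0 (absent) to 1.0 (maximal)
    """
    emotion = ET.Element("emotion", {
        "category": category,
        "intensity": f"{intensity:.2f}",
    })
    emotion.text = text
    return emotion

elem = annotate_emotion("I finally got the build to pass!", "joy", 0.9)
print(ET.tostring(elem, encoding="unicode"))
```

Even a toy like this hints at the hard questions such a standard would face: whose list of categories? Is a single intensity number enough, or are emotions multi-dimensional? Those are exactly the issues the group was chartered to investigate.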
A report by a U.S. Department of Commerce Task Force concludes that transitioning the Internet from IPv4 to IPv6 will be much like all too many other IT upgrades - long, tedious, expensive, and doubtful in their ultimate benefits - and ultimately unavoidable.
The profile of the Semantic Web continues to rise with an increasing number of interesting and diverse articles in the press and on line. Here are some more.
It seems that this is the week that the Ontologists and the Anarcho-Populists are taking to the streets to debate the One True Way to the Next Generation of the Web.
Last time around, DARPA had a clean slate to work with when it commissioned the Internet. Building the Next Generation of the Internet will be like a design competition to renovate a building that's in use, with a Zoning Board to satisfy, a hostile Neighborhood Association, and who knows what else.