Consortium Standards Bulletin - June 2005


June 2005
Vol IV, No. 6


Tim Berners-Lee brought the Web to the world a decade ago, and we will be forever in his debt. Now he’s trying to give us an upgrade he calls the Semantic Web. Just like the first time, people are having trouble getting it. But if we don’t try harder to “get it,” then we might not get it at all. (Get it?)


In this exclusive interview, Tim Berners-Lee explains what the Semantic Web can do for you, what you can do for the Semantic Web, what challenges lie ahead, and why it all matters.


Throughout history, new discoveries that have improved how we learn, share, archive, integrate and reacquire information, ideas and earlier discoveries have been followed by the rapid advancement of knowledge. The development of communities on the Web allows all five of these steps to be compressed into one process that might be called “SuperIntegration/Creation” – a process that will enable the next great leap forward in the acquisition of knowledge.

In March, Consortium leaders met with their counterparts from the accredited standards community to discuss the new United States Standards Strategy. Now ANSI is hoping that consortia active in security standard setting will participate in a Homeland Security initiative.


Standards tools come in many forms -- including a bewildering (and sometimes ominous) array of reference materials you can order from NIST. So, you want some Plutonium-242 with that Human Lung Powder?
EC Brings an Unusual Anti-Competition Investigation Against ETSI; XML Everywhere; IBM Launches Open Hardware Initiatives; Are Wireless Standards a “Complete and Utter Mess?” Will the ITU Succeed in Canning ICANN? Dramatic Initiative Launched to Bridge the Digital Divide; California Bans RFID on State-Issued ID Cards; Steroidal DSL Standard Approved by ITU; Patent Reform Bill Filed in Congress; Major Multinationals Vie to be “More Open Than Thou;” Rambus Sues Largest Customer; and, as always, much more




Since the advent of this journal we have made it our goal to provide useful information to our readers. At times, we have also tried to bring what we believe to be important points of view regarding standards to the attention of readers, hoping to legitimate topics of discussion, or to shed light on events, issues or trends that we believed were under-reported in the media (or had not been reported at all).

With this issue, we are addressing a topic which we believe to be important not just to those who are involved in standards, but quite simply to everyone in the world: the future of the Web.

The Internet and the Web are already the basis upon which almost all information is, directly or indirectly, accessed and managed. Their importance to human existence, Third World advancement, and almost every facet of life can be expected not to level off in the future, but to expand exponentially.

Today, we have an opportunity to take the Web portion of this invaluable resource a significant step forward by implementing the Semantic Web standards that have been in the making for some years within the W3C. If these standards are widely used to encode, locate and utilize data, then the Web will become a richer and more fruitful resource upon which the world may rely. If we do not, then other, less structured efforts directed towards enhancing our use of the Web will certainly proceed, but a major opportunity will have been lost.

This issue therefore has a cause: to support the efforts of Tim Berners-Lee and the W3C to promote the implementation of the Semantic Web. In doing so, we have not tried to ignore the practical or architectural concerns that have been raised by some regarding the design of the Semantic Web or the prospects for its broad adoption. In fact, we interviewed a variety of knowledgeable sources (CEOs of some of the best-known consortia, corporate standards directors, consortia technical directors, and those involved in product development), and without exception all expressed at least some reservations regarding how successful the Semantic Web will be. But significantly, the most common concern was not whether the Semantic Web could “work”, but whether the effort to make it exist would be undertaken, absent a clearer value proposition for those that would need to provide the time, effort and cash to bring it into being.

Having the Semantic Web, then, is a bit like not having global warming: few may get rich, many will pay, but ultimately all will benefit. It is in this spirit that we support the Semantic Web, and the reason why we urge you to do so as well.

Further to that goal, in our Editorial we review the concerns that have been expressed regarding the build-out of the Semantic Web and the creation of the tools needed to make use of it, and conclude that the Semantic Web will happen – not (initially) from the top down, but from the bottom up, as a critical mass of work is done through the type of spirited innovation, experimentation and conviction that has typified the work of the open source community.

In our Feature Story, we are honored to present a comprehensive, unedited, exclusive interview with Tim Berners-Lee, the creator of the Web and the visionary behind the Semantic Web as well. In this interview, Tim explains why he believes the Semantic Web must happen – and why he thinks it will.

In our Trends Story, we describe how the forward progress of human knowledge has accelerated dramatically each time that advancements have been made in our ability to learn, share, archive and integrate the information, discoveries and ideas of individuals. We then demonstrate how the continuing development of the Web will result in a new explosive phase of growth in human knowledge and potential. In so doing, we underline the stakes that ride on supporting the deployment of semantic technology in order to make the Web as valuable and useful in our own lifetimes as we are capable of making it.

In the months ahead, we will add new features to help you follow the further development and advancement of the Semantic Web.

Finally, to close on a lighter note, this month’s Blog entry reports on the role of government in providing standards-based reference materials to society – and on some of the surprising materials you can buy.

As always, we hope you enjoy this issue.
    Best Regards,
  Andrew Updegrove
  Editor and Publisher



Andrew Updegrove

Anyone who has ever had the pleasure of hearing Tim Berners-Lee speak knows that he is not the most patient of men. The words tumble out in a profusion of enthusiastic conviction, propelled by the type of urgency that comes from seeing the future, and wanting to make it exist today. It is thus ironic that the path that he is forced to walk to achieve that goal is one of the most tedious imaginable: standard setting. Not only is it a lengthy process, but one that is based upon consensus, requiring not just great labor, but a constant process of cajoling, inspiring and simply charging into the face of battle, challenging others to follow by sheer force of example.

If the rigor of the technical process were the only impediment to success, it would be challenge enough. But standards must not only be created to be worthwhile – they must be adopted and implemented as well. Thus, a second burdensome responsibility must be assumed by the champions of any new standard, which is to convince those for whom a standard has been created to not only accept the gift, but (figuratively speaking) to also prominently display it on the mantelpiece, rather than relegate it to the dark corners of the attic.

Never has this challenge been greater for Tim Berners-Lee than it is now, as he seeks to persuade the world that it is time to embrace his vision of the Semantic Web. To Berners-Lee, it is not so much a matter of convincing users to adopt a new type of Web, but of accepting a feature that was planned, but not included, in Web Version 1.0. But to much of the rest of the world, the Semantic Web is a difficult to understand abstraction, the utility of which is not universally grasped.

Perhaps worst of all, while those in industry generally accept the usefulness of the Semantic Web concept, they are not yet convinced that committing their resources to supporting it will provide greater economic gains than directing the same resources to other purposes.

For others, the question is not whether the Semantic Web is a good idea per se, but whether the technical framework conceived by Berners-Lee and his staff at the W3C is the right one. Among some, the debate centers on whether it is too comprehensive or too limited. Too inflexible or not rigid enough? Is it insensitive to cultural values? Is it an infeasible concept that is doomed to be ignored? Does it have an insoluble “chicken and egg” problem that can only be solved by browser developers buying in?

The views on the Semantic Web expressed today, as the initial enabling standards work approaches completion, seem almost as numerous as those that hold an opinion. And it must be confessed that in researching this issue with standards professionals in both the consortium and the corporate worlds, those that expressed pessimism for the eventual, pervasive implementation of the Semantic Web outnumbered the optimists.

The result is that Berners-Lee has found it necessary to spend an increasing amount of his time barnstorming the world to garner support for the Semantic Web. Given that the Semantic Web activity was launched in 2001 (a precursor project, the Metadata Activity, dates back to 1998), it would try the patience of a saint, much less someone with the energy level of Berners-Lee, to face the need to hit the stump after so many years of arduous conceptual and technical work. And yet, he is patient – reminding the world that this is how it happened before, with people scratching their heads and doubting, but ultimately “getting it” and climbing on board.

The difficult watershed of opinion that Berners-Lee straddles today may be summarized by two catch phrases that captured popular attention more than twenty years apart, each of which sought to convey a sense of the unimaginable.

The first was coined in the 1960’s, when protestors of the Vietnam war challenged society to imagine a world where the concept of armed conflict had been rejected, and where the existence of war had therefore been rendered impossible. That smirking but memorable phrase was, “What if you gave a war, and nobody came?”

The second slogan was popularized by the 1989 hit movie, “Field of Dreams,” in which the central character (Kevin Costner) is haunted by a voice, heard only by him, that intones, “If you build it, they will come.” Against the disbelief of family, friends and strangers, Costner’s character invests all he has in a seemingly romantic and hopeless quest, but nonetheless prevails and is victorious.

So which of these metaphors will apply to the labors of Berners-Lee, and the hundreds that have worked with him to create the standards they hope will be used to create the Semantic Web? Will it be, “What if you gave a Semantic Web, and nobody came?” Or, now that the core standards have been built, will a world of adopters play upon the field that has been prepared for them?

We believe that the latter will (eventually) prove to be the right metaphor, but for a reason that is common to both phrases.

Today, there are uncounted millions of technically adept, imaginative, and motivated individuals that are as comfortable with code, architectural concepts and new ideas as their grandparents were with home improvement projects, hand tools, and tried-and-true methodologies. Today’s generation is eager to experiment with virtually every new information technology capability that has been made available to it, resulting in myriad, and often surprising, results.

One need look no farther than phenomena such as open source software and music download file sharing to appreciate the potential for the viral, peer-to-peer, distributed uptake of new concepts. And the same examples demonstrate that the multinational corporations that had no interest in these concepts at the outset can become enthusiastic adopters when the potential for profiting from the same innovations becomes demonstrable.

Today, the attitude of most major corporations towards the Semantic Web is more supportive in principle than evidenced by actual product plans. True, the W3C Web site lists multiple statements of support, but the absence of some major corporations is as conspicuous as is the presence of those that are included.

Consider also the results of the following Google searches on three new areas of business opportunity, each of which is enabled by standards that are at roughly the same state of development vis-à-vis their suitability as a basis for productization:

“Web Services”
“RFID”
“Semantic Web”
Searches performed on June 17, 2005

In the case of Web services, the search results reflect the fact that many of the biggest IT vendors believe that great returns on investment may result from the development and deployment of Web services-based products for the supply chain – more of a “Killer Application Platform” than a single “Killer App”, but equally effective. In consequence, they are placing huge bets on Web services, resulting in significantly more attention being paid by both the technical as well as the business and financial press to the technology – over two thousand press releases and other stories in the preceding month.

The example of RFID is even more telling. Prior to the announcements by Wal-Mart and the United States Department of Defense that each would require RFID tagging from large numbers of vendors, the deployment of RFID technology was more in the “if” than the “when” category. Now, RFID is generating news stories at a rate comparable to Web services, riding on the credibility that these major adoption notices bestowed. In the case of RFID, therefore, it was not a “Killer App” that turned the tide, but a “Killer Customer.” The announcements by Wal-Mart, and then the DoD, persuaded additional industry players that a critical mass of buyers and vendors would indeed enter the field. As a result, the prices of RFID tags would fall, and profits would eventually (if not immediately) reward those that decided to invest in providing RFID-based products and services.

Does this mean that the Semantic Web is doomed to die before it is born, or that announcements comparable to those that launched RFID as a credible technology investment must be obtained before progress can be made?

While many think that the answer to that question is “yes”, we disagree. We believe that the Semantic Web will come into being (albeit gradually), simply because it can.

One reason for this prediction is that while a “Killer App” is hugely useful to legitimate and drive adoption of a new technology, better mousetraps have been a big business from time immemorial. 28.8k modems did not enable more applications than their 14.4k predecessors, nor did nominally 56k modems create dramatic new product opportunities over 28.8k modems. And yet they were developed, and the quest for broader bandwidth still continues at a fevered pace. Even if the Googles and the Yahoos of today do not display current interest in Semantic searching, others will see the potential for agents that search the Web, for cataloguing existing data in databases for Semantic purposes in intranets, and so on.

Eventually, there will be competitors to Google and Yahoo, because the profit potential for search engine-based advertising is so great. When this happens, the attitudes of the then-dominant search service providers will doubtless change. And, given Google’s penchant for rolling out new, secretly developed Beta services, do we really know today that they are on the sidelines after all?

One of the wonders that the Internet has made possible is that everyone can participate in the process of conception, creation and enjoyment of new technology and the fruits of that technology (assuming that the technology is not encumbered, on which more below). As a result, it is no longer necessary for huge corporations to commit to a given technical approach in order for that approach to become widely implemented. RSS syndication is an excellent example of a useful technique that arose not from a corporation, or even from a standards organization, but which nonetheless has become pervasive. Nor is it now usually necessary for new types of infrastructure to be created at great cost, as historically often constrained innovation in the case of areas such as telecommunications.

Today, with over a billion connected individuals, any good idea, and any robust tool, can be taken up and utilized if it becomes known, and if there are no impediments to its use. Tim Berners-Lee and the other true believers at the W3C have taken care of that last element as well, with the hard-won adoption of a Patent Policy in May of 2003 that makes it as difficult as possible for any standard adopted by the W3C to run afoul of a blocking patent or the requirement to pay a licensing fee.

Already, the evidence is that grass-roots interest is building for the Semantic Web, just as occurred with the Web before it. Today, a wide variety of communities, some ad hoc, some academic, and some sectoral (e.g., life sciences), have begun to explore the potential of Semantic techniques to address real problems and attain defined goals. As happened ten years ago, individuals, and individual communities, are realizing possibilities and opportunities that can be enabled using the standards, tutorials, white papers, and other materials that the W3C has generated. These activities will be organic, opportunistic, and rooted in achieving real goals.

And yes, there are examples of utilization of Semantic Web standards by some corporations as well, such as the Semantic Web tools included by IBM in its alphaWorks open source offerings, and the capabilities built by Adobe into its Creative Suite software, which will automatically include Semantic Web descriptions of all files that are created by Adobe applications.

But what of the critics who say that the Semantic Web structure is too rigid, or too flexible; too culturally neutral? Are they totally wrong?

Yes and no. Yes, in that the Semantic Web, just like the Web before it, is a best-guess charting of a voyage into the unknown. Those that opt to build out the next generation of the Web will go where they will, and the standards that enable the creation of that Web will evolve to match the places where these pioneers choose to travel, and be optimized to achieve the tasks that they undertake. As with all standards, there will be things that can be done, and things that (initially) can’t be done. And then there will be refinements, and new standards, to enable doing those things, too.

But the important thing is that voyages into the Semantic Web have now been made possible, and that they have begun. Ultimately, as with the original Web before and open source more recently, the bigger players will come on board when it appears to be in their best interests to do so. There is nothing wrong, and perhaps even much to gain, by the Semantic Web becoming real in that order.

So perhaps the critics will prove to be right, in that the Semantic Web may look different in ten years than it was envisioned by Tim Berners-Lee ten years ago, or as it has been enabled by the work of the W3C today. We believe that the detractors will be proven wrong when they contend that the Semantic Web will not happen, simply because the design of the standards system is not to their liking, or because no Killer App has yet been announced. And perhaps they will be right in that the Semantic Web may not prove to be as explosively utilized as is HTML. But, then again, it does not have to be so pervasive in order to be incredibly useful.

Still, Berners-Lee has a last, long, arduous lap to go. He has already conceived and shared the vision for the Semantic Web, provided the leadership to see the enabling standards become real, and fought the battle to ensure that those standards may be utilized by anyone without cost or troubling licensing restrictions. Now he is putting his reputation on the line to provide the credibility that will make it easier for those who are working on Semantic Web tools, techniques and encoding to do their work, so that the rest of us may eventually come to believe in his vision as well.

We believe that we are living in a time of democratization of technology that will result in an explosion of innovation, and that may well see a reordering of the forces of technological evolution. Rather than living in a world regulated by “top down” vendor-driven decisions on products, architectures and services, a “bottom up,” neural, global, real-time, self-calibrating and adjusting process of collaborative innovation will become pervasive, perhaps out-competing the capabilities of corporate research and development, and the power of corporations to mandate from the beginning what technical and architectural outcomes will succeed in the end.

Instead, corporations may become the opportunistic beneficiaries of this free research and development, with the most savvy and open-minded enjoying the most commercial success. After all – which bottom line would look better – one that is burdened with the costs of research and development, market research, and missionary selling, or one that is driven by free technology and the production of products that the market has already shown it wishes to buy?

So it is that we conclude that, now that Berners-Lee has led the process of building the enabling standards for the Semantic Web, “they will come”. In fact, creative, open-minded and adventurous players are already taking their places on the field. We live in a world today where, when confronted with the potential for something as valuable as the Semantic Web to become possible, it makes as little sense to ask “could it be stopped?” as “will it happen?”

Comments? Email:

Copyright 2005 Andrew Updegrove




Andrew Updegrove


Introduction: If you are a reader of this journal, it is likely that you are aware that one of the main goals of the W3C is nearing fruition: the deployment of the core set of standards needed to enable the next level of the Web itself. These standards are the extension of the original vision of the inventor of the Web, Tim Berners-Lee.

After years of development, these specifications will enable users to search not only for documents that contain data, but for the desired data itself, through “semantic” identification and location techniques. The result of implementing these standards will be the creation of a next-generation “Semantic Web.” This new Web will be capable of supporting software agents that can not only locate data, but also “understand” it, in ways that will allow computers to perform automatically and on the fly meaningful tasks that today must be done manually and episodically by computer users. Or, as summarized in a Scientific American article written by Berners-Lee, Jim Hendler and Ora Lassila in 2001, “The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.”

All of which would lead one to assume that the deployment of these standards will be immediate and enthusiastic. Unfortunately, when one seeks to describe the Semantic Web in greater detail, it tends to become more nebulous rather than clearer to the uninitiated. Were the standards created to allow better search engines to be built? No, but search engine technology will benefit from incorporating Semantic Web standards. Is it a new type of Web? No, but it will allow more to be done with the Web we have. Will it enable new “Killer Apps?” No – it is intended to be a Killer App.1

The result is that Tim Berners-Lee, who conceived the Semantic Web and has championed the development of the standards and supporting materials (primers, test cases, references and overviews) that will make it possible, has found it necessary to proselytize on behalf of the Semantic Web, just as he did ten years ago when descriptions of the now-familiar Web were met with blank stares.

In support of those efforts, we are honored to present a detailed interview with Berners-Lee that focuses exclusively on the Semantic Web, and why he believes its realization is so important.

Questions and Answers: Our interview was intended to help those that do not yet have the Semantic Web in focus gain an understanding of what the Semantic Web will (and will not) be, what we can look forward to using it for, and how it is likely to become real. Our questions were divided into the following six categories:

  • Vision: Why build a Semantic Web now, rather than add other capabilities at this point in time, and what new capabilities will the Semantic Web have?
  • Status: Who is already committed to create the Semantic Web, and how do we get the rest on board?
  • Critics: What has worked well and what has not proceeded so smoothly in developing Semantic Web standards, and what are the things that critics don’t “get” about the Semantic Web?
  • Business Reality: What are the biggest challenges to bringing the Semantic Web into being?
  • Infrastructure: Will other standards be needed in the future to take full advantage of the Semantic Web, and who will develop them?
  • Users: What will it be like to use the Semantic Web?

Tim Berners-Lee’s answers to our questions are provided in their entirety, without editing for considerations of space or otherwise. (For further context, see this issue’s Editorial and Trends article.)

CSB: Before we get into specifics, what is it like bringing your vision to the world for the second time of what the Web can be, now that the design for “version 2.0” has been completed?

TBL: Let me start by saying our work in promoting rather than developing the Semantic Web technologies has been like deja vu all over again for me. Fifteen years ago, one of the hardest things to do was not to develop the initial version of HTTP, or to create a browser that was also an editor, or even to get approval for the purchase of the equipment (!). The difficult thing was to convince people that the Web was something they should adopt.

At CERN, the killer app that got us through the technical barriers (OS, HW, philosophy) was making the phone book available through the Web. In the outside world, beyond lab settings, what helped the Web break through were two simultaneous developments - that CERN was making the code available to anyone who would like it free of charge or other encumbrance, and that young developers were coming up with browser software, including multiple implementations that supported inline images. And so with the potential licensing barriers down and the relative ease of setting up a server, things took off. But imagine, if you can, online information systems before the Web, and what it was like to try to explain the whole idea of the Web to people.

Envisioning life in the Semantic Web is a similar proposition. Some people have said, "Why do I need the Semantic Web? I have Google!" Google is great for helping people find things, yes! But finding things more easily is not the same thing as using the Semantic Web. It's about creating things from data you've compiled yourself, or combining it with volumes (think databases, not so much individual documents) of data from other sources to make new discoveries. It's about the ability to use and reuse vast volumes of data. Yes, Google can claim to index billions of pages, but given the format of those diverse pages, there may not be a whole lot more the search engine tool can reliably do. We're looking at applications that enable transformations, by being able to take large amounts of data and be able to run models on the fly - whether these are financial models for oil futures, discovering the synergies between biology and chemistry researchers in the Life Sciences, or getting the best price and service on a new pair of hiking boots.

I. Vision Questions

CSB: Did you consider other ways of taking the next step to evolve the Web besides the semantic approach? If so, what were they?

TBL: You mean, if we don't have data on the web, would there be other interesting things? Of course, but they wouldn't replace it. And we do have lots of other things happening. The Mobile Web Initiative -- just launched -- is about making it easy to make web sites which work on mobile devices. There's Web Services, about integration of programs across organizational and application boundaries.

It's a bit like asking, if we didn't have graphics on the web, would we have other interesting things? Well, we would -- but not graphics. The Semantic Web gives you an especially powerful form of data integration. It does this by using URIs, and by connecting your raw data (in databases, XML documents, etc.) to a model of the real things (like customers, products, etc.) which your business uses. Any system which does one without the other won't get the effect of allowing data from one application to be used in unexpected new ways by other applications. And any system which does the same thing but doesn't use the common standards isn't going to be compatible, and so isn't going to be part of it.

CSB: What are the limitations of the Semantic Web – what will it enable someone to do, and what will it not permit us to do?

TBL: The goal of the Semantic Web initiative is to create a universal medium for the exchange of data where data can be shared and processed by automated tools as well as by people. The Semantic Web is designed to smoothly interconnect personal information management, enterprise application integration, and the global sharing of commercial, scientific and cultural data. We are talking about data here, not human documents.

The Semantic Web is not about the meaning of English documents. It’s not about marking up existing HTML documents to let a computer understand what they say. It’s not about the artificial intelligence areas of machine learning or natural language understanding -- they use the word semantics with a different meaning.

It is about the data which currently sits in relational databases, XML documents, spreadsheets, and proprietary-format data files, all of which it would be useful to have access to as one huge database.

CSB: What sorts of technical or other constraints led you to adopt the particular semantic approach you adopted?

TBL: Building upon the Web architecture was an important technical constraint in the design and development of Semantic Web standards. The Semantic Web is about Web evolution, not revolution. Our focus on building the standards was to layer this work upon the existing Web infrastructure to create both a Web of Documents and Data. While this may be seen as a limitation, I believe it has been beneficial on the whole.

CSB: Some commentators, such as Kim Veltman in “Towards a Semantic Web for Culture” have been critical of the limited degree of cultural context that the current design of the Semantic Web could comprehend. Do you believe that it would be technically feasible in the future to accommodate such goals, and if so, would the Semantic Web be an appropriate platform from which to take such a next step?

TBL: The paper you refer to talks about the subtleties of the meaning of words in our natural languages, and how these change with evolving cultures. While an interesting study, it is not the domain of the Semantic Web.

CSB: As you look at the Semantic Web project now, some 8 years after its inception, are you encouraged or discouraged? Does it look to you today as if you will be able to accomplish less, as much, or more than you had originally envisioned?

TBL: The Semantic Web has a whole lot more to it than the original Web. Building something which will be a firm logical foundation for interoperating business systems and query systems and so on takes more work and has to be a lot better defined than a simple jotting down of some HTML tags! However, we have the entire URI and HTTP infrastructure to build on, of course.

One can always wish things were further along, but in fact I think the progress has been great. We were asked to hold up the query and rules work because people didn't want to start on it until the ontology (OWL) work had finished, so for some we were in danger of going too fast. Now we have a good solid layer of RDF and OWL which allows systems to be described, and data to be exchanged. OWL turned out to be more powerful than I had expected (I had expected something more like RDFS++) and that is great. The query language I think will be a major step, as it will allow major databases to be exposed without one having to transfer the whole file. It will also provide a way of integrating across SQL and XQuery systems.
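The kind of query TBL anticipates can be sketched in SPARQL, which was then moving through W3C standardization. The query below asks for names and mailboxes from any RDF data that uses the FOAF vocabulary; the particular properties queried are chosen purely for illustration:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Find names and mailboxes across any RDF data that uses FOAF
SELECT ?name ?mbox
WHERE {
  ?person foaf:name ?name .
  ?person foaf:mbox ?mbox .
}
```

Because the query runs against the graph model rather than a particular file layout, the same query can be answered by a native RDF store or by an adapter in front of an SQL or XQuery system, which is the integration point TBL describes.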

I'm disappointed that we haven't seen RDF used as an export format on random applications such as desktop and enterprise applications. This may be because the RDF/XML syntax is a little off-putting. It is an irony that the RDF model itself is simpler than that of XML, but this isn't evident when you encode it in the standard syntax. The informal N3 syntax provides a simpler, more human-friendly on-ramp for export and import, and it may be that standardizing it would be a useful step. On the other hand, there is an ever-growing set of adapters from various formats to RDF.
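The contrast between the two serializations is easy to see side by side. Both snippets below encode the same single statement; the example.org resource and the Dublin Core title property are invented for illustration:

```xml
<!-- RDF/XML: one statement, wrapped in namespace and element machinery -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/report">
    <dc:title>Quarterly Report</dc:title>
  </rdf:Description>
</rdf:RDF>
```

```n3
# N3: the same subject-predicate-object triple, written directly
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<http://example.org/report> dc:title "Quarterly Report" .
```

The underlying model in both cases is one triple; only the N3 form makes that simplicity visible.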

I am very happy about the reception which the Semantic Web has had in specific areas where people "get it". The FOAF project, for example, has a great spirit, and is a quite decentralized web of information about people's business cards, CVs, and who knows who. The whole area of life sciences and healthcare has been hopping with excitement as work is done to take down the boundaries between different silos of information across the field. We had a very vibrant workshop in the area, and Semantic Web was the talk of the recent BIO-IT conference.
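The "business card plus who-knows-who" flavor of FOAF that TBL describes can be sketched in a few lines of N3; the names and addresses here are invented:

```n3
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<#me> a foaf:Person ;
    foaf:name "Alice Example" ;
    foaf:mbox <mailto:alice@example.org> ;
    foaf:knows [ a foaf:Person ; foaf:name "Bob Example" ] .
```

Each person publishes a small file like this wherever they like, and the `foaf:knows` links stitch the files into the decentralized web of information TBL mentions, with no central registry.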

I think the hope for more true interactivity in terms of collaborative tools, particularly real-time collaborative tools, has yet to be realized – it’s something I had hoped for in the early days, and I am still hoping to see it happen.

CSB: Since this is your second time around designing the Web, what did you learn from taking the Web from concept to reality the first time that may help us anticipate how the Semantic Web will become real?

TBL: The Semantic Web idea -- that of having data as well as documents on the web -- has been around since the start of the web. It is just more complicated to do. Experience from the initial growth of the web of documents? Well, it was a very rigid exponential growth, which couldn't be slowed or hastened. Different people 'got it' in different years, and to them it seemed that the web had 'happened' all that year. It spread first among enthusiasts, and then among small subcommunities where one could get to critical mass with the momentum of a few champions. These communities (High Energy Physics for the WWW, possibly Life Sciences for the Semantic Web) are full of people who have very big challenges to tackle, and are largely scientifically minded people who understand the new paradigm. These things may be very similar.

Where it is different is that there is attention from the press. We work under floodlights. Whereas the WWW took off in the hands of the converts, and others were left in blissful ignorance, the SW takes off with articles like this one, and people checking to see whether it is time for them to get involved. This has helped in some ways, hindered in others. We have to work hard to make sure that expectations are not overstated.

I think there were important landmarks in getting the Web broadly adopted. The importance of CERN's decision not to impose onerous licensing conditions on the use of Web technologies cannot be overstated. I knew of companies – big companies – that forbade their employees to pick up our work until CERN made its declaration for free use. The W3C patent policy now makes the development of new standards much safer in this respect, and it is an important aspect of the Semantic Web that it be royalty free.

CSB: What would you like to see happen as the next step after the Semantic Web becomes a reality? Will the W3C be the place for that to happen?

TBL: The Web will continue to evolve and adapt, and the Semantic Web is part of this evolution. As the Semantic Web becomes more pervasive, I expect new challenges will be addressed in terms of usability and accessibility, along with the application of these technologies in a variety of new domains: mobile, scientific, cultural, etc.

Just as the big search engines, and the clever algorithms which drive them, could not be designed when the WWW was young, so there will be applications the need for which is only evident when we have a large-scale Semantic Web. These may involve the creation of new standards.

CSB: Who should be excited about the Semantic Web that is not perhaps realizing yet what it could mean to them?

TBL: Many large-scale benefits are, not surprisingly, evident for enterprise level applications. The benefits of being able to reuse and repurpose information inside the enterprise include both cost savings and new discoveries. And of course, more usable data brings about a new wave of software development for data analysis, visualization, smart catalogues... not to mention new applications development.

The point of the Semantic Web is in the potential for new uses of data on the Web, much of which we haven't discovered yet.

II. Status Questions

CSB: The February 10, 2004 OWL/RDF press release stated: “Today's announcement marks the emergence of the Semantic Web as a broad-based, commercial-grade platform for data on the Web. The deployment of these standards in commercial products and services signals the transition of Semantic Web technology from what was largely a research and advanced development project over the last five years, to more practical technology deployed in mass market tools that enables more flexible access to structured data on the Web.” Did that mean that you expected people to start encoding Webpages semantically from that point forward? Have they?

TBL: It’s not about people encoding web pages; it’s about applications generating machine-readable data on an entirely different scale. Were the Semantic Web to be enacted on a page-by-page basis in this era of fully functional databases and content management systems on the Web, we would never get there.

What is happening is that more applications – authoring tools, database technologies, and enterprise-level applications – are using the initial W3C Semantic Web standards for description (RDF) and ontologies (OWL).

CSB: If it’s too soon to expect people to start investing in the Semantic Web, what is the release schedule for the remaining standards needed to “base enable” the Semantic Web? In other words, when is it reasonable for site owners to start investing the time and effort to encode semantics into their webpages?

TBL: No time like the present. Getting data into Semantic Web-friendly formats is the very first step in the Semantic Web progression, but you correctly note that there are layers to go – for Rules, Query languages, logic and proof – that are part of that full stack. You can see a quick diagram at:

III. Critics

CSB: Not surprisingly for as complex and ambitious a project as this, there have been critics of the Semantic Web initiative. Which of their criticisms do you think are valid, and which invalid?

TBL: W3C has over 20 different Activities, all of which have Member support. The Semantic Web activity is one of them, and one of the few that gets the bulk of its operating costs from outside sources. (The well-regarded Web Accessibility Initiative has many outside sponsors from government and industry, and the new Mobile Web Initiative is building its budget on a separate fee structure.)

One of the criticisms I hear most often is, "The Semantic Web doesn't do anything for me I can't do with XML". This is a typical response of someone who is very used to programming things in XML, and never has tried to integrate things across large expanses of an organization, at short notice, with no further programming. One IT professional who made that comment around four years ago said a year ago, in words to the effect, "After spending three years organizing my XML until I had a heap of home-made programs to keep track of the relationships between different schemas, I suddenly realized why RDF had been designed. Now I use RDF and it's all so simple -- but if I hadn't had three years of XML hell, I wouldn't ever have understood."

Many of the criticisms of the Semantic Web seem (to me at least!) to result from not having understood the philosophy of how it works. A critical part, perhaps not obvious from the specs, is the way different communities of practice develop independently, bottom up, and then can connect link by link, like patches sewn together at the edges. So some criticize the Semantic Web for being a (clearly impossible) attempt to make a complete top-down ontology of everything.

Others criticize the Semantic Web because they think that everything in the whole Semantic Web will have to be consistent, which is of course impossible. In fact, the only things I need to be consistent are the bits of the Semantic Web I am using to solve my current problem.

The web-like nature of the Semantic Web sometimes comes under criticism. People want to treat it as a big XML document tree so that they can use XML tools on it, when in fact it is a web, not a tree. A semantic tree just doesn't scale, because each person would have their own view of where the root would have to be, and which way the sap should flow in each branch. Only webs can be merged together in arbitrary ways.

I think I agree with criticisms of the RDF/XML syntax that it isn't very easy to read. This raises the entry threshold. That's why we wrote N3 and the N3 tutorial, to get newcomers on board with the simplicity of the concepts, without the complexity of that serialization.

CSB: Does the Semantic Web have any enemies? If so, what are they doing to get in the way, and what is the strategy for dealing with this opposition? For example, will the Semantic Web provide opportunities to the major browsers, or will it threaten their hegemony? And can the Semantic Web succeed without them?

TBL: The SW is not at all a threat to existing browsers. Remember that this is adding something to the WWW, not replacing it. The existence of data on the web will not threaten the documents, music, pictures, and so on... on the Web.

With the standardization and deployment of Semantic Web standards in various commercial products and services, many people's perception shifted: what had been seen as research came to be recognized as practical technology, deployed in mass-market tools, that enables more flexible access to structured data on the Web.

I do expect there to be a serious first mover advantage when it comes to being Semantic Web compatible in software products. Data handling software which does not plug into the RDF data bus will be at a serious disadvantage when customers start to protect themselves by demanding SW compatibility.

IV. Business Reality Questions

CSB: What is the biggest challenge facing realization of the Semantic Web? Is it possible that the standards will be created, but not implemented, or that content owners will never encode semantically in sufficient numbers to make the Semantic Web initiative successful?

TBL: We've done a great job at establishing sound foundations for description (RDF) and for ontologies (OWL). We've seen significant interest and uptake in the Life Sciences community for the power that the Semantic Web can bring, and have seen successful Semantic Web projects at the National Cancer Institute, as one example.

It is very important to realize that the Semantic Web does not require content owners to individually encode information! The vast bulk of data to be on the Semantic Web is already sitting in databases -- and in files in proprietary data formats. Downloaded bank statements, weather and stock quote information, human resource information, geospatial data, maps, and so on... All that is needed is to write an adapter that converts a particular format into RDF, and all the content in that format becomes available.
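Such an adapter can be very small. The sketch below turns a downloaded CSV bank statement into RDF triples in the line-oriented N-Triples serialization; the column names, the `example.org` URIs, and the `vocab#` predicates are all invented for illustration (a real adapter would use a published vocabulary):

```python
import csv
import io

# Hypothetical downloaded bank statement; columns are invented.
CSV_DATA = """date,payee,amount
2005-06-01,Grocer,42.50
2005-06-03,Utility Co,19.99
"""

def csv_to_ntriples(text, base="http://example.org/stmt/"):
    """Convert CSV rows into N-Triples: one subject per row,
    one triple per cell, predicates derived from column names."""
    triples = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        subject = "<%s%d>" % (base, i)
        for column, value in row.items():
            predicate = "<http://example.org/vocab#%s>" % column
            triples.append('%s %s "%s" .' % (subject, predicate, value))
    return "\n".join(triples)

print(csv_to_ntriples(CSV_DATA))
```

Once the rows are triples, they merge freely with triples produced by any other adapter, which is the "one huge database" effect described earlier in the interview.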

CSB: People talk about a “Killer App” for the Semantic Web, and you rightly point out that the Semantic Web itself is the Killer App. Still, there has to be an incentive for people to encode semantically and create agents, so there seems to be at least a chicken-and-egg issue. Does a company like Google have to commit to semantic browsing before the Semantic Web takes off?

TBL: I think that for many companies it may be that the killer app is an intranet. Many of the early WWW servers were inside the firewalls. The valuable data is company-confidential, and it is much safer to experiment with new technology in private! One computer company had, I think, 100 web servers internally before it had a public one. Similarly now, pharmaceutical companies are experimenting internally, but the company data isn't all shared. This slows uptake, as the results are not there to be linked to by others. Similarly, when I do my personal finances using Semantic Web tools, I can export the rule files as an example -- but not the data!

Note that search engines for the traditional web of documents have the task of finding relevant items in a sea of documents in (some form of more or less broken) natural language, with links. The Semantic Web is very different. Search techniques for the Semantic Web are going to be very different: it may be that the value add will be made in different ways by systems roaming around and looking for patterns, or by performing some specific types of inference, or by indexing Semantic Web data in new interesting ways. It probably won't be eigenvector-based link analysis which drives the good hypertext search engines.

In a way, the search engines are making up, by special techniques, for the lack of machine-understandable semantics in the documents on the web.

CSB: If buy-in by the search engines is crucial, where do Google, Yahoo, MSN (or other candidates to fill this role) stand on the Semantic Web?

TBL: Of these, I know that Yahoo has implemented a Search feature based on Creative Commons' work, which uses RDF to describe licensing terms on Web content:

CSB: If the big browser companies do not come on board, what will be the value proposition that will drive semantic encoding?

TBL: The Semantic Web architecture does not involve HTML browsers as we know them. There is a new breed of generic Semantic Web browser, but they are more like unconstrained database viewing applications than hypertext browsers.

There are at least two Semantic Browser projects I know of at MIT alone.

SIMILE is a joint project conducted by the W3C, HP, MIT Libraries, and MIT CSAIL. SIMILE seeks to enhance interoperability among digital assets, schemata/vocabularies/ontologies, metadata, and services. A key challenge is that the collections which must interoperate are often distributed across individual, community, and institutional stores. To address this goal in part, the SIMILE team created Piggy Bank, an extension to the Firefox web browser that turns it into a Semantic Web browser, letting you make use of existing information on the Web in more useful and flexible ways.

The Haystack Project is investigating approaches designed to let people manage their information in ways that make the most sense to them. By removing arbitrary application-created barriers, which handle only certain information “types” and relationships as defined by the developer, we aim to let users define their most effective arrangements and connections between views of information. Such personalization of information management will dramatically improve everyone’s ability to find what they need when they need it. This includes Piggy Bank as well as what they call the universal information client.

CSB: Who do you expect the early adopters to be, on the encoding side? Are there some there already?

TBL: Adobe is the only one I can talk about today, but there are others on the cusp of announcement.

CSB: Are specialized agents essential for the Semantic Web, or would adoption of semantic search capability by the Googles of the world be enough?

TBL: I think you're likely to see both.

CSB: What will the Semantic Web do to browsers? Will it be likely to strengthen the influence of the major browsers, or result in new entrants?

TBL: I think you'll see a bit of both here as well - revitalization of competition, and clear targets for functionality, but it’s a bit complicated. In short, browsers will be affected by the Semantic Web in many ways.

They may be pressured to become generic Semantic Web browsers. They may use Semantic Web metadata to accompany the human-oriented media. They may use Semantic Web metadata to select and marshal human-oriented metadata. There may be a very powerful client-side programming platform developed (as in Haystack, and RDF-Ajax applications) in which the client-side script sees the world and the display medium as a mass of RDF and SPARQL.

CSB: Do you believe that semantic encoding will become ubiquitous quickly, or will there be a two-layer Web for a long period of time (or perhaps permanently)?

TBL: It's not as if every page on the Web will be retrofitted with Semantic information. What we are likely to see though, is the wrapping of existing data stores, such as data in relational databases. We could anticipate a "View Data" feature in much the same way some of us "View Source". It's also worth noting that the new work in XHTML 2 is looking to include RDF capabilities.

There will always be on the web documents to be processed by people, and data to be processed mainly by machines. This is a feature, not a bug.

CSB: How much of semantic encoding can be automated?

TBL: Virtually all. This is like asking how much encoding of spreadsheet data is automated. Eh? These are data systems, not human writing systems. The data is all data; the encoding can only be automatic.

V. Infrastructure Questions

CSB: Are there standards that will logically be needed to reap the full potential of the Semantic Web that the W3C will not be appropriate to create, and if so, what purpose will they serve?

TBL: There are dozens of organizations out there who are interested in developing their own ontologies, just as there continues to be demand for industry-specific XML vocabularies, on an even larger scale. W3C has recently announced the launch of an Incubator Activity, in which groups with good ideas that fall outside the usual W3C purview – think of the development of a specific ontology, for example – can have space at W3C to discuss and develop their ideas more fully.

I think it’s also important to note that great ideas can develop anywhere, in a variety of organizations. Our hope is that people who decide they want to grow an infrastructure piece of the Semantic Web will come to W3C.

CSB: Which organizations, logically, would create them?

TBL: Initially, it is wise to keep the Semantic Web developer community well in touch with itself, and the W3C is the center of that community. However, when the number of ontologies being built grows to become difficult to track, it will be essential for scaling reasons for the development work to move to organizations for specific fields. There is a bit of an analogy with XML, as the XML core and also early applications such as MathML were developed at W3C, but now XML schemas are developed all over the world where the needs are found.

CSB: Are these other organizations already engaged in that process, and do they see things the same way?

TBL: At this point, we do not have explicit dependencies on other organizations, though it could happen in the future. However, W3C is primarily focused on Web infrastructure, and in the case of Semantic Web, the Semantic Web infrastructure. Looking at the stack, it’s clear that some of the boxes have yet to be fleshed out. I think we will have diverse communities coming together to contribute their ideas and directions to the development of each of those components.

A practical example is the case of Rule Languages. They’ve been around for years – think business rules, or Prolog – but there has yet to be a sweeping dominant rule language standard, much less one that works on the Web. We recently held a Rules Workshop, and the first day was filled with people presenting their different ideas on Rules – in some cases, they might have been speaking different languages without an interpreter. On the second day, though, these diverse people were able to establish some significant common understandings through the discussion of use cases.

And I think that smaller organizations focusing on single markup languages, coupled with diverse users under the W3C umbrella, make a great way to get these technologies developed.

There’s no stovepiping at W3C, and there are interoperability requirements that are firm – so we can be sure that if we take up a Rule Languages Working Group, it will have to work with RDF, OWL, URIs, and the Web architecture as it has been articulated at W3C.

CSB: Are there other technical trends or architectural goals in the evolving Internet and Web landscape that are working against achieving the Semantic Web, or is it still a clear playing field for you to work upon?

TBL: Vendors have limited resources in much the same ways their customers do, and so if they have committed enormous resources to things that are not Semantic Web, they have little room to move. But even here, we’re seeing a shift.

CSB: What will the use of agents querying for semantic data do to Web search speeds? Will it place additional demands on infrastructure?

TBL: The kinds of searches we'll be seeing may not be quite the same kind of searches we're used to doing today - in fact, I think we'll see more automation integrated into "search" type applications. We're also at a point where CPUs are not the obstacle, given their price and availability.

VI. User Questions

CSB: What would a browser that was optimized for the Semantic Web look like?

TBL: The Semantic Web is in use today behind several data aggregation sites, providing richer and more expressive means of integrating data and delivering content. A simple example of a browser optimized for the Semantic Web would allow one to "view data" that might be associated with any of these services, and would provide the means to save, reuse, and integrate this information in a variety of other ways with other applications. If one is attending a meeting, or booking an airline reservation, one could simply 'save' this information and drag it onto a calendaring application rather than going through the tedious effort of cutting and pasting. There is an entirely new set of applications we could imagine, with the only limiting factor being our imagination.

CSB: When will Web users begin to enjoy the benefits of the Semantic Web?

TBL: They already have, in applications that range from social networking (FOAF), content description (Adobe Creative Suite), learning about licensing constraints of Web content (Creative Commons), as well as the widespread use of OWL in a variety of disciplines.


1. This article does not focus on the technical details of the Semantic Web. To learn more about how the Semantic Web will work, see the following resources:

For the W3C Web Standards Activity Page (with links to the Semantic Web Activity Statement, Recommendations [approved standards], News and Events, Presentations, and much more), see:

For a detailed description of the technical basis for the Semantic Web and what it will make possible, see: The Semantic Web, Scientific American, May 2001

The early vision (Tim Berners-Lee, September 1998):

Comments? Email:

Copyright 2005 Andrew Updegrove




Andrew Updegrove

Abstract: Throughout history, the progressive acquisition of knowledge by mankind has involved five distinct tasks: learning, sharing, integrating, archiving, and then reacquiring (to start the process again) information, discoveries and ideas. Innovations such as language, writing, printing, telecommunications and information technology have each dramatically increased the speed and volume of knowledge acquisition by optimizing our ability to accomplish one or more of these tasks. The Web now permits global, Web-based communities to combine all five of the traditional steps into a single democratic, merit-based, neural, real-time, ongoing, evolving, integrating process of knowledge acquisition. This revolutionary process may be referred to as “SuperIntegration/Creation,” and can be expected to have a profound impact on how we live, work and learn.

Introduction: The ability of humankind to take incredible things for granted is remarkable, and especially so where such things are in constant, everyday use. Thus it is that the significance of the Web is too often ignored rather than properly appreciated.

True, the enthusiastic chants of the Internet bubble years that “the ‘Net will change everything!” did prove to be the outrageous claims of the naïve, the irrationally exuberant, and the cynically opportunistic, at least with respect to the time frame within which that transformation was predicted to be realized.

The World Wide Web that was enabled by the deployment of the Internet, on the other hand, is another matter. The potential impact of the Web extends far beyond its ability to facilitate education and everyday information gathering. Instead, it represents the means by which the next great explosion of collaborative intellectual discovery will occur, and will enable the type of sudden and rapid advancement of the arts, sciences and even human society itself that has occurred only a small number of times in the past. A brief review of these previous great leaps forward, and the enabling discoveries that made them possible, serves to demonstrate that this prediction is not conjectural, but a certainty.

If this premise is accepted, then the nurturing of the Web, and the extension of its availability to all people everywhere, becomes not an ideal so much as an imperative. In that context, the implementation of the Semantic Web (see the Feature Article and Editorial in this issue) is an opportunity not to be neglected.

Innovations Enabling Advancement: It is a scientifically accepted conclusion that the humans of today are essentially identical (including in intellectual capacity) to those first modern humans that appeared upon the scene more than 100,000 years ago. And yet it was tens of thousands of years before Homo sapiens discovered agricultural principles, began cultivating food crops and settled down in villages. From that point forward, the pace of knowledge acquisition increased radically. The result was the development of conceptual tours de force such as mathematics, advanced arts and science. In due course, the industrial age dawned, followed eventually by the computer age. Now, we are embarking upon what might be called the Age of the Web.

What enabled this accelerating rate of advancement?

The base dynamic was (and continues to be) the ability to build upon the information, discoveries and ideas that were acquired and developed by others, particularly those that came before. This process depends upon the following steps:

  • Learning something new (acquiring new information, and reaching new conclusions)
  • Sharing the new information and discoveries (rather than hoarding it for personal advantage)
  • Integrating that information with other existing or newly acquired information to a purpose
  • Archiving the information, in accurate form, for future generations
  • Reacquiring that archived information (i.e., knowing it exists and how to find it)
  • Learning something new as a result, and repeating the cycle

Each time there has been an important advancement in our ability to perform one (or more) of these process steps, there has been a commensurate advancement in our ability to build upon information, discoveries and ideas more productively and rapidly.

Until the last few decades and the wide deployment of computers, databases, networks and (most recently) the Web, browsers and search engines, dramatic advancement has been possible in only two of these steps: the ability to share information and the ability to archive it. And yet significant advancements in just these two processes have been matched by explosive advancement in human knowledge.

Let us briefly examine each of the major leaps that have occurred in pre-history and history to see how improvements in sharing and archiving have resulted in a complementary leap in the advancement of knowledge acquisition, as a foundation for evaluating how the advent of the Web will enable the next great leap forward in the acquisition and utilization of knowledge.

The first great leap forward: Language: The manifestation of human intelligence that gave rise to the first burst of advancement was doubtless language, which allowed the predecessors of modern humans to share information in greater detail and complexity than can other animals. Through language, not only could the location of necessities be conveyed from one individual to another, but so also could valuable discoveries, such as the observation that animals moved to certain places, and by certain routes, at predictable times of the year – and that certain natural phenomena could be used as indicators of when those times of year had arrived.

As a result, the knowledge acquired by individuals could not only be shared on a current basis, but could also be verbally passed down from one generation to another – the earliest advancement in archiving discoveries and strategies since the evolution of genetically programmed instinctual behavior and the ability to teach by visual example.

With the evolution of language, the human species was able to populate even the most inhospitable parts of the world, because inventions as humble (but significant) as the needle and sinew-thread enabled the development and use of warm, sewn garments. These innovations could be shared and passed down from generation to generation, and allowed the migration of bands of humans into progressively colder and more hostile environments. Other discoveries had impacts of similar importance.

But language, without a more reliable means of archiving and more established lines of communication, was an imperfect foundation upon which the progressive advancement of knowledge could be based. It is believed that in prehistory, as today, hunter-gatherer societies were based upon bands of only a few families, which met perhaps once or twice a year with other, related bands totaling a tribe of perhaps a few hundred individuals. Thus, the potential to share new discoveries was systemically limited to, at most, that number of individuals. The discoveries of another tribe could be learned through the occasional capture of members of that tribe and through trading relationships, but in the former case the willingness to accept the customs of enemies might be limited, and it may have been many thousands of years before human population density and sophistication developed to the point that trade became conventional.

Finally, droughts and other natural disasters could befall the individual bands and tribes that had made new discoveries, resulting in the loss of that hard-won knowledge.

Useful discoveries might also fall into disuse (and therefore become lost), if they became superfluous under changing conditions. For example, we know that humans reached Australia more than 50,000 years ago – and that the nearest land to Australia at the time was at least 55 sea miles away. As a result, a society necessarily existed at that time that was capable of building sophisticated boats capable of permitting a pioneer population to island-hop its way down the archipelago of Southeast Asia, often venturing forth to seek land that could not be seen over the horizon.

And yet that impressive technology was lost (or became superfluous) in the years that followed: by the time that Europeans arrived tens of thousands of years later, the aboriginal inhabitants of Australia were leading a far more basic hunter-gatherer existence than their predecessors may be presumed to have pursued. Some anthropologists now believe that a similar phenomenon occasioned the first peopling of North America via boat perhaps 30,000 years ago (or more), via an Arctic route.

Thus, without a reliable means to share knowledge broadly within a single generation, or to pass that knowledge from one generation to another, there was scant opportunity to advance learning by building discoveries upon discoveries, or to permit abstract thinking to evolve. How many Newtons and Einsteins were wasted in the last hundred thousand years as a result? And how many innovations (like the lost boats of the Australasian pioneers) were developed by gifted individuals, that have been lost forever from view?

Only with the eventual settling of bands and tribes into year-round locations as a result of the advent of agriculture did a more structured society evolve, and with that development, the persistent need to fix information in a tangible medium and the conditions in which a technique for doing so could be conceived.

The second great leap forward: Writing: The advent of writing was the second revolutionary innovation that truly “changed everything,” as it not only hugely advanced the state of the art of archiving, but also radically expanded the ability to share information, ideas and discoveries across distance and time.

While initially writing may have been limited to recording commercial data (in the Fertile Crescent) or the heroic deeds of kings (in Mesoamerica), once conceived, its utility for other purposes was rapidly realized. Now, the preservation of useful information, discoveries and ideas was no longer solely dependent upon the incentive and opportunity to convey them verbally from generation to generation; the integrity of data and conclusions could be preserved as well, even if their custodians made no current use of these writings (or even remembered their existence).

With the ability to preserve information, ideas and discoveries, came a much richer opportunity to integrate new information and discoveries with old, to prove or disprove earlier theories and build upon the results, and to base new ideas and theories on those that had come before. For the first time, data and ideas could be accurately transported, and seekers of knowledge could travel to the site where valuable information was known to be maintained. One result was the opportunity for explosive advancement in abstract thinking in areas such as philosophy, mathematics and astronomy, as well as practical disciplines like irrigation, construction and ceramics.

Over the next several millennia, where stable societies existed over long periods of time (as in Sumer, Babylonia, Egypt, Greece and Rome), rapid advancement occurred in multiple areas of knowledge. With increasing trade between broadly separated peoples over established land and sea routes, copies of archived knowledge could be (and increasingly were) shared, resulting not only in wider derivative innovation, but in the preservation of learning (at least in part) when disaster befell, as tragically occurred with the destruction of the Royal Library of Alexandria.

The third great leap forward: Enhanced Availability: The innovation that enabled the next great leap forward was not revolutionary but evolutionary: the invention and proliferation of the printing press. With the ability to disseminate and share information, discoveries and ideas more broadly, many more individuals had the opportunity to integrate information, and to contribute their own intellectual discoveries to build upon the work of others. With more copies of any single source of knowledge finding their way into archives, vulnerability to loss was vastly diminished. And finally, with books becoming more available and affordable, the ability to read became a skill more worth acquiring.

Soon, institutional libraries became more commonplace, as did private collections of books. With the new wealth of access and more opportunities to obtain an education came a greater opportunity for original and sophisticated thinking, and more like-minded individuals with whom to carry on inquisitive discourse. True, those engaged in science had used Latin as a universal language for centuries to engage in the extensive (albeit slow) international sharing of information, discoveries and ideas. But there was also great deference to the conclusions of those that had come before – even thousands of years before, in the case of the Greek philosophers, anatomists and physical scientists. With the dawn of the Enlightenment, however, a new willingness to question and hypothesize took hold, conjoined with a great age of discovery that generated a wealth of information that cried out for interpretation.

The fourth great leap forward: Simultaneous Sharing: The next technical advancement was evolutionary in one sense, and revolutionary in another, and began with the electrical transmission of the first telegraph message. Now, information could be shared not only universally, but also at the same point in time and in substantially uninterpreted form, allowing multiple recipients to integrate that information, and form and then share conclusions based upon the same raw information. As wire services were established, and as newspapers bought and disseminated the information they provided, the first age of the “information silo” began to fade and a contemporaneous, wide area network of information availability began to evolve. With the invention of technologies such as microfiche and the willingness of libraries to archive newspapers, the ability for future historians to access raw, contemporaneous data for ongoing, independent analysis, integration and interpretation also expanded exponentially.

While evolutionary in the sense that electronic transmission was merely a faster form of delivery of information than letters, the immediacy and simultaneity of access that the telegraph made possible was revolutionary. And by globalizing and multiplying the places in which raw information could be shared and archived, the same data could be interpreted and creatively integrated from more and different perspectives than had ever been possible before.

With the subsequent invention and spread of the telephone, not only could remote data be accessed globally, but direct contact could increasingly be established with first hand witnesses. And with the eventual spread of television and the deployment of satellite feed technology and on-location camera crews, even first hand information acquisition on a mass basis became possible, albeit on a selective (and sometimes censored or slanted) basis.

The fifth great leap forward: New Learning, Reacquiring and Integration Tools: With the development of cheap, powerful and available computing power and the design of database technology, the stage was at last set for dramatic progress on the integration and reacquiring process steps. Now, vast quantities of data could not only be searched, but relationships could be discovered and modeling performed through automated processes, enabling tasks to be performed in seconds that might have taken years to accomplish before, if they could have been performed at all.

The result was that new types of discoveries became feasible that would have been impossible to make but a short time before, and theories could now be substantiated that necessarily would have remained theories indefinitely absent the means to test them. At the same time, rapid advancement in a bewildering array of learning tools (electron scanning microscopes, space probes, genomic sequencing techniques and so much more) created a supernova of new information for information technology to integrate and archive in a manner that would permit for easy future, automated reacquisition.

As the Twentieth Century approached its close, then, the infrastructure and tools that enable information, discoveries and ideas to be learned, shared, integrated, archived, and reacquired had made astonishing strides, reaching what seemed to be the ultimate platform upon which future human progress could be based for all time.

And then, the Web “changed everything” again.

The sixth great leap forward: The Unity of all Information, Discoveries and Ideas: With the advent of the Web (and then the first graphical browsers and search engines), the ability to learn, share, integrate, archive, and reacquire information, ideas and discoveries truly exploded. Today, given a search engine, the most inexpensive of personal computer systems and some sort of telecommunications access, anyone anywhere in the world may access orders of magnitude more information in an instant than most human beings could hope to view in a pre-Web lifetime. And with the contemporaneous progress that has been made in telecommunications and the build-out of the Internet, all of the power of information technology may be deployed on a globally shared basis as well. With continued progress in areas such as Web Services and Network Centric computing, the ability to productively wield this power will continue to increase.

But something much more profound has become possible by means of the Web, and it is this capability that can be expected to provide the next, and perhaps the most radical, advancement of human knowledge and society.

In human terms, the access provided by the Internet and the information provided by the Web represent the single greatest avenue for equality of advancement by the individual in the history of the human race. And, for the same reason, these tools also represent the greatest opportunity for the advancement of knowledge in human history as well.

At the most basic level, making the Web available to all of humanity is akin to packing more transistors on a single chip. Whereas twenty years ago a few tens of millions of people in the entire world had access to first-hand academic libraries at one time, now there are over a billion people that can connect to millions of pages of information, discoveries and ideas of comparable quality. Just as the power of computer chips today is immeasurably greater than that of those deployed in the early days of the silicon revolution, the mental processing power of the planet will explode as a billion – and soon more – individuals can access “system resources” through the Web, as well as conjoin their intellects in what amounts to the parallel processing integration of all existing knowledge to spawn new ideas and make new discoveries. And no more need the intellects of potential Einsteins and Newtons be limited in their ability to flourish, regardless of where they may live, if connectivity is within reach.

The results will be even more dramatic, because the Web, enabled by the Internet, dramatically facilitates the operation of each of the five key steps that have led to the advancement of knowledge throughout human existence. Consider the following brief summaries, each of which captures only the spirit of the revolution that has occurred:

       Learning: Those that live in the First World can locate data in minutes that would have taken hours to acquire in traditional libraries, and those in the Third World who have never had access to libraries may now acquire knowledge that might have remained hidden from them for life. New ideas are being formed on a constant basis using on-line resources; many of these ideas would never have been formed absent the wealth of information, discoveries and ideas that can be accessed on the Web. Even the quantity (if perhaps not the quality) of ideas being formed has surely increased, prompted by the incentive of sharing ideas and insights using this new medium (Blogs are the most recent example of this phenomenon; the ideas presented at this Website are another).

       Sharing: People who have not written a letter in decades send hundreds (and even thousands) of words a day to specific recipients, as well as to unknown numbers of readers at chat rooms, discussion boards and other electronic destinations. Scholarly work that used to be available only in expensive journals found only in academic and urban libraries is increasingly posted on public sites. Unique resources that until now were made available only to qualified scholars by appointment, such as the notebooks of Isaac Newton, are being scanned and added to the Web, so that anyone can study Newton’s genius as manifested in the details of his thoughts as they were recorded in his own hand. Enormous amounts of information of great value, such as aerial photos, topographic maps and statistical data, can be downloaded that formerly would have been available only in a tangible, proprietary, costly form. Open source and Creative Commons licensing models are being ever more pervasively used, augmenting a new on-line ethos of no-cost sharing that has heretofore been unknown outside of academic circles.

       Archiving: Not only are untold thousands of pages of information, discoveries and ideas added to the Web each day, but this new material is being acquired and indexed on a constant basis by search engines. In the scant space of a decade, virtually every category of data, by subject matter and by media type, has either been added to the Web or is in the process of being added. Entire libraries of material are in the process of being scanned and posted, making it imaginable that in another decade the majority of what is worth accessing may well be accessible.

       Reacquisition: Traditional methodologies have been utilized to allow the efficient “local” reacquisition of information, discoveries and ideas (e.g., existing indices and bibliographic information for journals are available at hosting sites), and the first generation search engines of today provide a useful, if not highly precise, means of reacquiring what has been archived globally. New methodologies – such as the Semantic Web – are being created on an ongoing basis as well, each of which is intended on a macro or a specialized basis to increase the intelligence with which information, discoveries and ideas can be reacquired from the Web. Presumably, this is the technical area where the greatest continuing technical advancement will occur, in order to make more useful the riches that are constantly being added to the resources that are already available on the Web.

The big one: “SuperIntegration/Creation”: I have purposely left integration until last, because it is at this level that the revolutionary aspect of the Web is most manifest, and at which historical labeling becomes inadequate, requiring a new term to describe what is happening in a multitude of new peer-to-peer, collaborative, on-line communities. Let us adopt “SuperIntegration/Creation” as that term. SuperIntegration/Creation may be thought of as being simultaneously both a verb and a noun, and can be defined as follows:

The democratic, merit-based, neural, real-time, ongoing, evolving, sharing, integration and archiving of all information, discoveries and ideas that, on a Web-wide basis, represents our entire understanding of a subject at any point in time.

SuperIntegration/Creation is occurring all over the Web, in venues as diverse as open source projects and the compilation of the Wikipedia (the extremely successful, non-profit encyclopedia project). The elements of the definition above are worth explaining individually, using the Wikipedia as an example.

SuperIntegration/Creation is:

       Democratic, because the Web knows no boundaries. Unless an (increasingly rare) password stands between a user and a site, language alone is a barrier to entry, and anyone may participate. Many on-line projects have little or no hierarchy of authority or decision-making. Not only can anyone launch or add to a Wikipedia entry, subject only to a level of quality control, but the Wikipedia is multilingual, with 33 different language versions in existence today that have 5,000 or more (and as many as 60,000) entries.

       Merit-based, because participation in many Web-based communities is not only resume-neutral, it may even be anonymous. Many projects do not identify individual authors of any of the constituent parts of their work product. Thus, participation and inclusion of individual output is based on the community’s opinion of the knowledge and skills of each community member rather than on that individual’s credentials or titles in the real world. At the Wikipedia, anyone can launch or add to an entry. If the quality of their contribution is judged to be high, then the entry stays.

       Neural, because a large, varying and concurrent number of participants can join in a process, each of whom plays a collaborative part. No one “owns” a Wikipedia entry; anyone can add to an existing entry – and does, adding new information, sub-topics and insights unbounded by the conceptions of any single author.

       Real-time, because an increasing number of Web-based projects have no defined workday, or referential time zone (the Wikipedia never sleeps).

       Ongoing, because many Web-based projects have no reason for an end date – they are processes more than they are projects, and because they are not fixed in tangible media, they may (but technically need not) have version or edition numbers. As new information becomes available and new events occur, they can be folded into the project. The Wikipedia by definition will never be “finished” any more than will the Linux kernel.

       Evolving, because many Web-based projects involve not just information, but discoveries and ideas as well. Thus, even historical data may be reinterpreted, or act as the basis for new discoveries. Not only are new Wikipedia entries being added on a constant basis, but each individual entry becomes its own micro SuperIntegration/Creation project, limited only by the number and enthusiasm of those that have an interest in that topic. See, for example, the detailed entry on World War I with its comprehensive outline and hundreds of out-bound links. But for a real example of the cultural richness and enthusiasm of the Wikipedia, see the current index for the beloved, wise (and now retired) comic strip, “Calvin and Hobbes,” reproduced at left.

1 History
     1.1 Syndication and Watterson's artistic standards
     1.2 Merchandising
2 Style and influences
3 Setting
4 The main characters
     4.1 Calvin
     4.2 Hobbes
          4.2.1 Hobbes' reality
5 Supporting characters
     5.1 Recurring characters
          5.1.1 Calvin's dad
          5.1.2 Calvin's mom
          5.1.3 Susie Derkins
          5.1.4 Miss Wormwood
          5.1.5 Rosalyn
          5.1.6 Moe
     5.2 Infrequent or background characters
6 Recurring themes
     6.1 Calvin's alter-egos
     6.2 G.R.O.S.S.
     6.3 Mealtimes
     6.4 The cardboard box and Calvin's inventions
     6.5 Snowmen
     6.6 Art and academia
     6.7 Wagon and sled
     6.8 Calvinball
     6.9 Snowball and water balloon fights
     6.10 Lemonade stand
     6.11 Camping trips
     6.12 School and homework
     6.13 Items left to the reader's imagination
7 Calvin and Hobbes books
8 Related articles
9 External links

SuperIntegration/Creation has been in existence as an unconscious social process for as long as humanity has existed (culture is one name we give to it), but it has not been capable of existing in as comprehensive a form as the Web permits since human beings were hunter-gatherers, when each band could participate in the creation and maintenance of, and hold in its collective consciousness, all that was known at that point in human history.

Web-based SuperIntegration/Creation is revolutionary in that it simultaneously, rather than serially, incorporates all of the process steps that I have addressed in this article: those that participate in a Web-based process are at once learning, sharing, integrating, archiving – and then learning again. What a Web-based community is experiencing is therefore both a process (the verb) and the instantiation, at that point in time, of what that community collectively knows, is thinking, and is discovering (the noun).

Taken together, the Web itself is a vast, organic SuperIntegration/Creation project -- what the French Jesuit philosopher Père Pierre Teilhard de Chardin presciently called, in 1925, the “Noosphere”: the layer of all human knowledge and culture surrounding the globe.

Summary: Are the products of collaborative projects like the Wikipedia superior to their historical antecedents (such as encyclopedias)? In some cases yes (e.g., art projects) and in some cases no (e.g., Wikipedia-type projects that involve scholarship without suitable quality control). The important point is that they are different in important ways that will yield results that would not be possible by normal, step-wise, centralized, more hierarchical and self-selecting methods.

Conclusions: At the dawn of mankind, sharing of knowledge was universal within small groups, but the acquisition of knowledge was extremely slow, and that knowledge was vulnerable to loss. Over time, successive cultural and technological innovations enabled sudden leaps in mankind’s ability to learn, share, integrate, and archive information, discoveries and ideas. While this process was orderly, it was also step-wise (and therefore time consuming) and elitist, in that the ability to participate was limited to those with the requisite resources, social privileges, proximity and education.

The advent of the Web and the process/reality of SuperIntegration/Creation that it enables, “changes everything” about how we acquire and use knowledge to a greater extent than perhaps any prior innovation since the acquisition of language. This new Web-enabled ability to think and exist within an evolving knowledge process, rather than to individually “know” only an erratically updated body of knowledge, is likely to have profound consequences on how we learn, think, and make new discoveries. Similarly, the lack of hierarchy and freedom of participation in many SuperIntegration/Creation settings upsets many traditional age, title and education-based power relationships.

As time goes on, one can expect more and more communities to arise on the Web, and these communities will engage in ever more challenging SuperIntegration/Creation processes. In the near future, we may spend more and more of our education, work and recreational time in such settings.

Perhaps this latest revolutionary leap in how we acquire knowledge, as with our acquisition of the gift of language so many millennia ago, may even change what it means to be a human being.


Copyright 2005 Andrew Updegrove





ANSI-HSSP Addresses Homeland Security Standardization


On March 23 of this year, a very successful meeting (co-sponsored by ANSI and Gesmer Updegrove LLP) was held between ANSI and the leaders of 21 consortia. The purpose of the meeting was to brief, and receive input from, the management of ICT consortia on the revision of the United States Standards Strategy.

Now, ANSI is hoping that representatives of the many consortia that are actively engaged in developing security and other standards relevant to Homeland Security concerns will participate in another ANSI activity: the Homeland Security Standards Panel (HSSP). We are pleased to bring this invitation to your attention, and hope that the consortium community will take part in this activity.

The American National Standards Institute (ANSI) Homeland Security Standards Panel (HSSP) recently celebrated its two-year anniversary of serving as an important public-private sector coordinator in the homeland security standards arena. The Panel has as its mission to identify existing consensus standards or, if none exist, to assist the Department of Homeland Security (DHS) and those sectors requesting assistance to accelerate the development and adoption of consensus standards critical to homeland security. The Panel issued a "two-year accomplishments" document that details the progress that has been made to date via its workshops on specific homeland security subjects, as well as areas for further exploration.

ANSI-HSSP workshops have already been convened to examine standardization in the areas of private sector emergency preparedness and business continuity, biometrics, biological and chemical threat agents, and citizen preparedness. Work is ongoing via workshops and task groups for the areas of training programs for first response to WMD events, emergency communications, enterprise power security and continuity, and perimeter security.

The next ANSI-HSSP plenary meeting will be held September 29-30, 2005 at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. In addition to general homeland security standards presentations and discussion, three breakout sessions will be held on the following subjects: Chemical, Biological, Radiological, Nuclear and Explosives (CBRNE); Border & Transportation Security (BTS); and Emergency Preparedness and Response (EP&R) and Infrastructure Protection (IP).

The Panel is seeking increased participation from fora/consortia that are involved with homeland security standardization. Any organizations wishing to join or participate in upcoming meetings are encouraged to contact Matt Deane (212-642-4992), ANSI-HSSP Program Manager. Further information on the group and its activities is also available on its website.



#29 For Your Reference

One of the oldest functions of government (extending back to the dawn of recorded history) is the establishment, maintenance, and policing of weights and measures. So important is this role that the framers of both the original Articles of Confederation of the United States and the Constitution that replaced them included the right to “fix the standards of weights and measures” among the powers that the citizens of the new nation granted to their government (see Article 1, Section 8 of the latter document).

The antiquity and utility of weights and measures is in part attributable to the fact that they can be represented by physical objects. There is perhaps no more intuitively understandable standard than a reference weight in a balance scale: this much sugar in one pan of a scale is equal to the kilogram weight in the other pan.

Not only weight, but other qualities of goods, of course, are significant as well, such as purity, ductility, optical clarity, and composition (as in alloys) to name but a few. Governments regulate many of these properties for a variety of reasons, from concern over public safety to facilitating their own procurement activities. In each of these cases, carefully manufactured and measured physical examples of standards (referred to as “reference materials”) can be created and used in the same conceptual way as weights in a scale.

So it is that the United States government has developed reference materials for additional purposes beyond “fixing the standards” of common, everyday weights and measures. In fact, it has developed a lot of reference materials, including quite a few that might never occur to you. Take “Multi Drugs of Abuse in Urine,” for example (3 bottles; $336.00).

These materials have been established by the National Institute of Standards and Technology (NIST), and you can order them through NIST’s helpful website (major credit cards welcome). But touring that website, as with so many other locations on the Web, can be a somewhat surreal experience.

Most of the thousands of materials that you will find there are rather mundane chemicals, alloys and the like. But what should we make of Toxic Metals in Bovine Blood (set (4), $455.00)? What, we might ask, is the purpose of that reference material? Or perhaps we’d rather not know.

A number of sets of materials are equally intriguing. One collection relates to pollutants: domestic, commercial and as they have spread in the natural world. Consider, for example, the pleasantly alliterative Priority Pollutant PAHs (set (5) $296.00), or the enigmatic Urban Dust (2.5 g, $324.00). Or, more ominously, River Sediment (Radioactivity) (85 g, $398.00). Or my personal favorite: Domestic Sludge (40 g, $371.00).

Trying to guess the rationales for specific selections is a mystery in itself. Why, for example, can one purchase New York/New Jersey River Sediment A (50 g, $527.00), but there is no entry (for example) for “New Hampshire/Connecticut River Sediment B”? And what’s the deal with San Joaquin Soil (50 g, $314.00), among all other soils?

Rocky Flats Soil #1 Powder (85g, TBD) may ring a bell, and is therefore easier to figure out, but only up to a point. After all, Rocky Flats is the site of a former nuclear weapons factory, and is the subject of a massive cleanup effort. But once having made that connection, a different question arises: Are you allowed to be as radioactive, but not more so, than Rocky Flats soil? Perhaps the reference soil has already been remediated (or so I hope).

Some materials are simply pleasant to read. There are, for example, Waspaloy (disk, $449.00) and Equal Atom Lead (1 g wire, $278.00). Google them if you must, or just repeat them to yourself, like the cartoon-character Zippy (Equal Atom Lead! Equal Atom Lead!)

Others are a challenge to pronounce, such as Adipate and Phthalates in Methanol (5x1.2 mL, $312.00). Or just plain intimidating. Take, for example, Non-Newtonian Polymer Solution for Rheology – Polyisobutylene Dissolved in 2, 6, 10, 14-Tetramethylpent (100 mL, $736.00).

Then there are a few that are perky and fun, such as High-Energy Charpy (set $489.00), Scheelite Ore (100 g, $220.00) and Sugarcane Bagasse (50 g, $193.00). Who cares what they are used for?

For a bit of tranquility in the middle of an otherwise technical list, it’s also good to know that there is a place on line where you can replenish your supply of Apple Leaves (50 g, $348), albeit at the cost of a bit of sticker shock. Or perhaps a small supply of Peach Leaves (50 g, $337) is just right for you today.

Tranquility might be in short supply after visiting the selection of radioactive materials that NIST is apparently happy to ship to you. How about a bit of Plutonium-238 Solution (5 mL, $788.00)? Or, for the more cost-conscious, NIST is offering its popular Curium-243 Solution (5.1 g, $451.00).

If neither of those brings a warm glow, there are powders, solutions and other preparations of Cesium-137, Cobalt-57, Europium-152, Strontium-90, Barium-133, Uranium-232, Americium-241, Thorium-229, Plutonium-242, Radium-228, Neptunium-237, Thorium-230, Ytterbium-169, to name only a sampling of NIST’s impressive product line in this category. (Sorry, though: Radioactive materials cannot be ordered online. Please fax your order to 301 948 3730.)

Had enough? Don’t go yet. You haven’t visited the food pavilion!

If you’re just starting out, perhaps the Typical Diet (set (2) $675.00) is right for you. For the more sophisticated, Meat Homogenate (4x85, $428.00) may be just the thing, or the popular Oyster Tissue (25 g, $573.00), or perhaps Trace Elements in Spinach Leaves (60 g, $425.00). (It would be best if you didn’t mention the Organics in Whale Blubber (2x15 g, $378.00) to Greenpeace, though).

Oh – on a diet? Perhaps a visit to the forensic section will take your mind off junk food. Over here, for example, we have Ashed Bone (Radioactivity) (15 g, $475.00), as well as Human Lung Powder (45 g, $404.00). No, I don’t want that Twinkie, either.

If it's not forensics but law enforcement that turns you on, then you might want to outfit your home lab with Cocaine + Metab in Urine (Set (4), TBD), or perhaps the elegantly named Drugs of Abuse in Human Hair I (100 mg, $554.00). Use the state-of-the-art e-commerce technology at the NIST site to add any of these (and much more!) to your shopping cart.

Maybe the environment is your game, and PCBs in Human Serum (set (3), TBD), or Respirable Alpha Quartz (5 g, $404.00) is what you need. Watch out for the Toxic Metals in Freeze-Dried Urine (set (4), TBD) though; that one’s nasty.

You may be all shopped out by now, but if you’re still looking for something for the Man Who Has Everything, then the Artificial Flaw for Eddy Current (each, $605.00) may be just what you need.

Time to go? Best to bookmark the page before you do. After all, the holiday season will be here before you know it.

Comments? Email:

Copyright 2005 Andrew Updegrove

The opinions expressed in the Standards Blog are those of the author alone, and not necessarily those of
Gesmer Updegrove LLP

Postings are made to the Standards Blog on a regular basis. Bookmark:


For up to date news every day, bookmark the
Standards News Section

Or take advantage of our RSS Feed

Government Action

It should be clear that a particular technology is covered by a patent before a standard is set, so this can be taken into account when deciding whether to set the standard [May 31, 2005]
Spokesman for the European Commission's Competition Commissioner

What’s going on here? Usually, when a government enforcement agency opens an investigation involving standard setting activities, the line of attack is predictable. That’s why it was startling when news was leaked last week that the EC’s antitrust arm is looking into whether the intellectual property (IP) policy of ETSI, the influential European telecom standards body, is too loose to prevent “submarine patents” (i.e., patents that are hidden by their owners until a standard has been adopted and widely implemented). In the United States in recent years, the courts have taken a “hands-off” approach to standard setting rules (as the defendants in the Rambus litigation found to their chagrin), allowing standards organizations to have as tight or loose an IP policy as they wish. Investigations, when they have been brought at all, have been brought against game-playing participants, and not the standards organizations themselves.

While tighter IP policies may ostensibly be a good thing, a legal requirement to adopt them would have an earth-shaking impact on the standards setting world, where current policies are more permissive than prescriptive, and amending a policy can be laborious. One reason is that companies with large patent portfolios that are active in many standard setting organizations wish to avoid the cost of performing endless patent searches to ensure reliable disclosure. We will monitor this situation closely, and report on it on a current basis at our News Portal as further details become available. So far, facts are few – including who leaked news of the investigation (and why).

Telecom standards face patent ambush threat
By: Ingrid Marson, June 15, 2005 -- The European Commission is investigating Europe's main telecoms standard-setting body due to concerns that a flaw in its procedures could allow companies to carry out a 'patent ambush'. A spokesman for the EC's Competition Commissioner told ZDNet on Wednesday that it is investigating the European Telecommunications Standards Institute (ETSI), an independent organisation that sets standards in Europe, to ensure that its procedures do not allow anti-competitive behaviour. "We have an ongoing investigation. ...Full Story

New Standards, Initiatives, etc.

Law enforcement agencies don't want to stand in line at our jail and do data entry. [May 31, 2005]
John Doktor, technical director of the Integrated Criminal Justice Information System agency for Maricopa County

More than one way to skin a (semantic) cat: Semantics can be put to work in information technology in more ways than one. And while the Semantic Web is currently getting the most attention in the media (and in this journal as well), it is not the only initiative underway that is intended to permit computers to make better use of the meaning of words. The following article reports on an effort to use semantic logic and artificial intelligence to guide manufacturing processes.

Software Advance Helps Computers Act Logically
NIST Tech Beat, June 16, 2005 -- Computers just respond to commands, never “thinking” about the consequences. A new software language, however, promises to enable computers to reason much more precisely and thus better reflect subtleties intended by commands of human operators. Developed by National Institute of Standards and Technology (NIST) researchers and colleagues in France, Germany, Japan and the United Kingdom, the process specification language software, known as ISO 18629, should make computers much more useful in manufacturing. ...Full Story

XML -- here, there (and everywhere!) What standard generates 26.409903 times more hits on the Web than “george bush”? Why, “XML”, of course. And as the selection of articles that follows shows, it’s no wonder, as XML’s utility continues to expand unabated. In no particular order, the following articles report on new XML initiatives in areas as diverse as publishing, justice/Homeland Security (where as many as 200 schemas may exist), open source office suite software, AMBER Alerts…and auto racing statistics.
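The reason a single markup standard can serve domains this diverse is that one XML document can feed very different consumers. The sketch below is a purely hypothetical illustration (the element names and the alert itself are invented for this example, not drawn from any actual Global Justice XML or AMBER Alert schema) of how the same structured message might be rendered as a text message for one audience and as database fields for another:

```python
# Hypothetical illustration: one XML document, two very different consumers.
import xml.etree.ElementTree as ET

alert_xml = """<alert>
  <type>AMBER</type>
  <issued>2005-06-01T14:30:00</issued>
  <subject>
    <name>Jane Doe</name>
    <age>8</age>
  </subject>
  <vehicle plate="ABC123">blue sedan</vehicle>
</alert>"""

root = ET.fromstring(alert_xml)

# A cell-phone gateway might flatten the document into a terse text message...
sms = (f"{root.findtext('type')} Alert: {root.findtext('subject/name')}, "
       f"age {root.findtext('subject/age')}, {root.findtext('vehicle')}")

# ...while a law-enforcement database keys on individual structured fields.
record = {
    "type": root.findtext("type"),
    "plate": root.find("vehicle").get("plate"),
}

print(sms)
print(record)
```

Both consumers parse the same bytes; neither needs to know the other exists, which is the interoperability point the articles below keep returning to.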

Architecture for Publishing
BusinessWire, Boston, MA, June 1, 2005 -- OASIS, the international e-business standards consortium, today announced that its members have approved the Darwin Information Typing Architecture (DITA) version 1.0 as an OASIS Standard, a status that signifies the highest level of ratification. DITA defines an XML architecture for designing, writing, managing, and publishing many kinds of information in print and on the Web. DITA consists of a set of design principles for creating "information-typed" modules at a topic level. DITA enables organizations to deliver content as closely as possible to the point-of-use, making it ideal for applications such as integrated help systems, web sites, and how-to instruction pages. ...Full Story

XML: Out of the Shadows
By: Jim McKay
Government Technology, May 31, 2005 -- Global Justice XML may link law enforcement, firefighters, emergency management services and more. The federal departments of Homeland Security and Justice recently agreed on a global data-sharing standard that could spur interoperability throughout the public safety community and beyond. The move limits proliferation of incompatible XML data models -- which translate data into information that can be shared among multiple IT systems -- and opens the door to greater cooperation among law enforcement, firefighters, health organizations and others. ...Full Story

OASIS Approves OpenOffice 2.0 File Format
By: Steven J. Vaughan-Nichols
eWeek, May 23, 2005 -- OASIS, the international e-business standards consortium, announced on Monday that it has approved the Open Document Format for Office Applications Version 1.0 as a standard. OpenDocument (Open Document Format for Office Applications) is the new default XML-based file format for the forthcoming open-source office suite OpenOffice.org 2.0. Although based on the 1.x format, which was submitted to OASIS (Organization for the Advancement of Structured Information Standards) in 2002 by Sun Microsystems Inc., OpenDocument is not compatible with the OpenOffice 1.x formats. ...Full Story

AMBER Alerts Distribution Expands Across Wireless Telephone Networks
By: Corey McKenna
Government Technology May 27, 2005 -- Several wireless telecommunications companies have announced that they are adding capabilities to their networks to support the distribution of AMBER Alerts to cell phones. The move to connect their networks to the AMBER Alert system and give subscribers the option to receive AMBER Alerts as text messages on their phones will enable local law enforcement to reach millions with AMBER Alerts. The AMBER Alert Web Portal expands the types of information that can be distributed as well as the devices that can receive them. The data, including pictures, text and biographical information, is encoded in XML, which enables it to be received by a multitude of electronic devices including cell phones, email, PDAs, pagers, fax, lottery machines, and other standard communication devices in as quickly as ten minutes after an AMBER Alert has been declared. ...Full Story

See Also:

IPTC: Extended Scope of Sports Data Standard - and a New Chair of the Board
BusinessWire, Windsor, UK, June 16, 2005 -- Automobile racing statistics have been added to the growing roster of sports that are supported by XML standards of the International Press Telecommunications Council. More than 40 IPTC delegates representing member news agencies and news system vendors voted unanimously to add auto racing support to SportsML, the standard language for sharing sports data among news agencies, sports data providers and other sports data users. Race entries, lap data, results and drivers’ winnings are among the rich array of statistics supported by the new automobile racing module. ...Full Story

And now, IBM presents Open Hardware computing: Formerly IP-centric IBM has been full of pleasant surprises since it developed its current Open Source/Open Standards strategy. IBM’s making 500 of its patents available for open source purposes garnered wide press coverage, but two other initiatives that received less notice (reported on below) involve IBM’s release of not just software specifications, but hardware designs as well. This new willingness by IBM to open its IP kimono is not only provocative, but represents an even more dramatic about-face, given IBM’s historically iron-centric culture.

Power.Org on the Road to Open Hardware
By: Tony Lock, June 15, 2005 -- Truly seminal events occur very, very rarely. This is as true of IT as in any area of life. Very few of them are recognised as such at the time. The formation of the Power.Org community may possess the potential to influence the way core IT hardware develops in the same way that the Open Source movement has altered the evolution of software models. ...Full Story

IBM opens up its Cell to boost digital computing, June 10, 2005 -- IBM is planning to provide the open source community with the key hardware/software specifications of Cell, which is a chip that has been used by IBM, Sony and Toshiba to provide computer processing ten times more powerful than today's PCs. IBM believes that the release of this code will jump start the creation of innovative new applications and a thriving ecosystem around Cell, especially in medical imaging, video processing, High Performance/Scientific computing, security, defence, and other industries that need massive graphics and visualisation processing power, provided by Cell. The chip is currently used in Sony's new Playstation 3 and could create a new generation of digital services, both large and small, according to Big Blue. ...Full Story

What’s the straight scoop on wireless, anyway? Well, that’s a question upon which reasonable (not to mention biased, self-interested, or just plain unreasonable) advocates, pundits, investors, and vendors can differ. Will WiMax predominate, or not? Will agreement be reached soon on an ultra-fast 802.11n standard (or not)? Will Bluetooth and UWB become an item? The Wireless USB (yes, that’s “USB,” not “UWB”) standard has been completed, but will it be adopted? Are wireless standards a flourishing garden of opportunity, or, as the Executive Director of the UWB (yes, that’s “UWB,” not “USB”) Forum recently opined, “a complete and utter mess?” Read on and see what you think – your guess may be as good as anyone’s.

WiMax Is Coming, Say Execs At Supercomm
By: Elena Malykhina
InformationWeek, June 7, 2005 -- The buzz around WiMax technology isn't all hype, industry executives told attendees at a Supercomm 2005 conference session Tuesday. What's more, equipment and service providers are starting to deliver products for mobile WiMax, which offers more roaming flexibility over the traditional fixed WiMax. WiMax is an industry move to a standards-based use of wireless technology, and the Federal Communications Commission has several initiatives in the works to make it a reality. ...Full Story

802.11n Wireless Standard Deadlock Continues
TelecomWeb, May 24, 2005 -- The IEEE committee attempting to hammer out an agreement on the new 802.11(n) wireless standard has yet again failed to agree on what the high-speed wireless standard should look like. The two remaining warring camps were in a voting deadlock late last week at a meeting in Cairns, Australia, where hopes were that a standard would finally be ratified. 802.11n is to define technology that calls for a minimum of 100 Mb/s wireless networking and holds out the promise of 300 Mb/s. The 802.11n standard process, which initially started out with a clutch of competing proposals, has now boiled down to two – the World Wide Spectrum Efficiency (WWiSE) consortium and TGn Sync – each backed by competing powerful players. ...Full Story

Bluetooth looks to UWB for bandwidth
By: Stephen Lawson
ComputerWorld, May 31, 2005 -- The body that oversees the Bluetooth personal-area wireless specification wants to take advantage of emerging UWB (ultrawideband) technology to create fast networks that are backward-compatible with current Bluetooth products. The Bluetooth Special Interest Group (SIG) has announced its intention to work with the WiMedia Alliance and the UWB Forum, which are promoting two different UWB technologies. UWB is designed to deliver much greater bandwidth than a WiFi wireless LAN, but over a distance of only a few metres. ...Full Story

Wireless USB group finishes 1.0 specification
By: Stephen Lawson
ComputerWorld, May 25, 2005 -- A cable-free version of Universal Serial Bus took a big step forward on Tuesday with the completion of the Wireless USB 1.0 specification, but there is still some work to be done and questions remain about its prospects for widespread adoption. The specification was created by the Wireless USB Promoter Group, a league of seven vendors that includes the heavyweights of the PC universe: Intel Corp. and Microsoft Corp. The group has now handed over management of the standard to the USB Implementers Forum (USB-IF), the governing body for all USB specifications, said Jeff Ravencraft, chairman and president of the Wireless USB Promoter Group. ...Full Story

Wireless standards are 'a complete and utter mess'
By: Andrew Donoghue, May 26, 2005 -- The proliferation of competing wireless standards risks confusing technology users and preventing such technologies from gaining mass-market acceptance, according to the Ultrawideband (UWB) Forum. Speaking as part of a panel of industry experts at the Wireless Connectivity World (WiCon) conference in London this week, Mike McCamon, executive director of the UWB Forum, said that the current state of the wireless world is analogous to the wired world in the 1990s — where competition between wired networking technologies such as Token Ring and Ethernet left users confused and uncertain. "It's 1990 in the wireless space at the moment. It is really a complete and utter mess but hopefully we can move towards having one complete standard and get past all this politics," said McCamon. ...Full Story

…And what’s the story here? The World Summit on the Information Society (WSIS) is a multi-year process of the United Nations and the International Telecommunication Union that we have been covering for some time (see, for example, Who Should Govern the Internet?). One of the more contentious aspects of WSIS has to do with governance issues – a topic upon which there are rather diverse opinions, as well as national strategies, as evidenced by the three articles selected below. But while the right hand of the UN was considering undercutting ICANN, the UN’s left hand (in this case, the World Intellectual Property Organization (WIPO), another UN agency) was conducting business as usual with ICANN, as reported in the last item below.

Commission outlines EU negotiation principles for the World Summit on the Information Society in Tunis
Europa Press Release, June 6, 2005 -- Preparations for the second World Summit on the Information Society (WSIS) in Tunis (16-18 November 2005) have entered a crucial phase. This summit should reach an international consensus on two key unresolved issues from the first phase: Internet governance and financial mechanisms for bridging the digital divide between developed and developing countries. The European Commission has now adopted a communication outlining the EU’s priorities for the Tunis meeting. ...Full Story

A Battle for the Soul of the Internet
By: Elliot Noss
Special to ZDNet, June 06, 2005 -- With little fanfare, there is a battle going on for the soul of the Internet. The United Nations and the ITU (International Telecommunication Union) are trying to wrest control of domain names, the DNS and IP addresses from ICANN (Internet Corporation for Assigned Names and Numbers). This battle manifests itself through the U.N.-created World Summit on the Information Society (WSIS) and the ITU-led Working Group on Internet Governance (WGIG). ...Full Story

Time for Icann to let the world look after the net?
By: Jo Best, June 8, 2005 -- With the EU gearing up for the World Summit on the Information Society (WSIS), it seems top of the agenda is who runs the internet - and whether current internet guardian, US-based Icann, should get the chop in favour of a more international grouping. According to a communication from the European Union setting out its priorities for discussion for the WSIS meeting in Tunis in November, internet governance is one issue primed for resolution. "The question of internationalising the management of the internet's core resources... appears to be one of the main issues currently being discussed," the EU said. ...Full Story

WIPO urges tradename protection for new domains
By: John Blau
InfoWorld, June 3, 2005 -- The World Intellectual Property Organization (WIPO) is suggesting some safeguards to stop cybersquatters from grabbing trademark-protected names under new generic Internet domain names. WIPO is recommending a "uniform intellectual property (IP) protection mechanism" to prevent illegal domain name registrations in any new generic Top Level Domains (gTLDs), the organization said earlier this week in a statement. Its report, "New Generic Top Level Domains: Intellectual Property Considerations," was commissioned by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization responsible for overseeing the domain name system (DNS). ...Full Story

Standards and Society

Existing internet governance mechanisms should be founded on a more solid democratic, transparent and multilateral basis, with a stronger emphasis on the public policy interest of all governments [June 8, 2005]

EU Communique stating European priorities for the upcoming WSIS meeting in Tunis

Can you hear me now? While the fencing between the ITU and ICANN may evidence the less attractive aspects of WSIS, the following story highlights both the need and the hope that this initiative holds for the Third World. The goals are impressive in scope, and fortunately, the plans to meet them are as well, as described in a recent press release from the ITU.

ITU Launches New Development Initiative to Bridge the Digital Divide
ITU Press Release, Geneva, June 16, 2005 -- The International Telecommunication Union today launched a major new development drive designed to bring access to information and communication technologies (ICTs) to the estimated one billion people worldwide for whom making a simple telephone call remains out of reach. Called Connect the World, the initiative is designed to encourage new projects and partnerships to bridge the digital divide. By showcasing development efforts now underway and by identifying areas where needs are the most pressing, Connect the World will create a critical mass that will generate the momentum needed to connect all communities by 2015. At present, ITU estimates that around 800,000 villages — or 30% of all villages worldwide — are still without any kind of connection. Connect the World places strong emphasis on the importance of partnerships between the public and private sectors, UN agencies and civil society. ...Full Story

Yes, but is that a good thing or a bad thing? The U.S. government has recently been taking a proactive role in calling for the development of electronic standards for the reporting of patient information. This month, an advisory board to promote and facilitate the creation and adoption of standards that can enable “electronic health records” was announced. But what of privacy and security issues, some have asked? With the recent disastrous breaches of security in the world of consumer credit information, calls for stringent rules to protect privacy and security of personal health information will doubtless grow.

Advisory board planned for health IT
By: Bob Brewin, June 6, 2005 -- The Department of Health and Human Services plans to form an American Health Information Community (AHIC) advisory committee to speed development and adoption of a nationwide electronic health record (EHR) system, HHS Secretary Mike Leavitt announced today at a health information technology summit in New York City....The new advisory panel will also help develop standards-based health IT systems and recommend a nationwide architecture that uses the Internet to securely exchange health care information. Once those tasks are completed, AHIC will work to replace the advisory body with a private-sector committee within five years....Full Story

Standards in Action

I have some bad news: RFID is based on RF and RF is a very beastly thing to control [June 15, 2005]
Jurgen Reinold, senior director of technology of Motorola's secure asset solutions unit, commenting on RFID interference issues

And speaking of privacy… As we have reported for some time, there are people (with varying degrees of sanity -- see Dan Mullen, Andrew Jackson, and the Dark Side of the Internet) who have privacy concerns relating to the collection of data by RFID tags. As the next several articles demonstrate, while vendors and retailers have great hopes for RFID deployment, others are not so sure. In some cases, limiting legislation has been the result.

RFID to grow, SAP America CEO says
By: Rhonda Ascierto
Computer Business Review Online, June 15, 2005 -- SAP, which makes software that links RFID data to business applications, has seen a 1,000% spike year-over-year in customers that deploy RFID, he said. And the company expects great things from the nascent technology during the next few years. "If you study what is on the minds of CEOs today, 88% of the CEOs surveyed said the No. 1 strategic imperative for the next 3 to 5 years is to grow their businesses. I look at RFID as an enabling technology," McDermott told ComputerWire. ...Full Story

Tag Team: Tracking the Patterns of Supermarket Shoppers
Wharton, May 26, 2005 -- To the untrained eye, the data presentation looks remarkably like an Etch-a-Sketch drawing, little more than a child's randomly drawn zigzag pattern on a favorite toy. But to Wharton marketing professor Peter S. Fader, those seemingly random lines represent a new dataset showing the paths taken by individual shoppers in an actual grocery store. The data -- charted for the first time by radio frequency identification (RFID) tags located on consumers' shopping carts -- has the potential to change the way retailers in general think about customers and their shopping patterns. [requires sign-in]...Full Story

California Set to Ban RFID IDs
AIM Global, May 24, 2005 -- The California State Senate recently passed SB 862 prohibiting the use of RFID in any state-issued "document" (including driver's licenses, ID cards, student ID cards, health insurance or benefits cards, professional licenses and library cards). The bill was supported by a number of prominent privacy and civil liberties groups including the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF). The bill, introduced by Democratic State Senator Joe Simitian, was partly in response to the poorly-conceived RFID-enabled student ID cards issued in Sutter, California earlier this year. ...Full Story

Do you want video on demand with that? For years, cable and satellite were the only games in town for high-bandwidth services across the infamous “last mile.” Eventually, there was DSL, which delivered impressive data transmission speeds (at least in comparison to dial-up modems) over the existing twisted-pair copper wire connections that virtually all US homes already possess. Now, the ITU is finishing work on a new standard for a next-generation DSL – and this one is on steroids.

New ITU Standard Delivers 10x ADSL Speeds
ITU Press Release May 27, 2005 -- The International Telecommunication Union (ITU) today finalized work on new technical specifications that will allow telecoms operators around the world to offer a 'super' triple play of video, Internet and voice services at speeds up to ten times faster than standard ADSL. The ITU-T Recommendation for very-high-bit-rate digital subscriber line 2 (VDSL2) will allow operators to compete with cable and satellite providers by offering services such as high definition TV (HDTV), video-on-demand, videoconferencing, high speed Internet access and advanced voice services like VoIP, over a standard copper telephone cable. The new VDSL2 standard delivers up to 100 Mbps both up and downstream. ...Full Story

Intellectual Property Issues

This version still has something to offend almost every interest [June 8, 2005]

Patent lawyer Dennis Crouch, commenting on a bill to reform the U.S. patent system

Stop me, before I grant a software patent again: Lately, the only thing less popular than used car salesmen (and, of course, lawyers) seems to be the US patent system, especially when it comes to software. After many calls for reform, not just from individuals but from government agencies as well, there is now a bill in Congress to reform the patent system. Meanwhile, across the pond, the EU continues to see-saw over whether to permit software to be patented.

A fix for a broken patent system?
By: Declan McCullagh, June 8, 2005 -- Rep. Lamar Smith, who heads the House of Representatives committee responsible for drafting patent law, said his proposal would improve the overall quality of patents and target some of the legal practices that have irked high-tech companies. "The bill will eliminate legal gamesmanship from the current system that rewards lawsuit abuses over creativity," said Smith, a Texas Republican. The Business Software Alliance was quick to praise the bill, saying in a statement that it goes a long way toward "improving patent quality, making sure U.S. law is consistent with that of other major countries and addressing disruptions caused by excessive litigation." ...Full Story

Vote looms for European patent bill, June 17, 2005
-- Member states and the European Parliament are looking at a bill on patenting inventions that use software. The legislature's legal affairs committee is due to debate the bill Monday and vote Tuesday. "It's not like a budget, where you can cut the pie in two. Either you allow patenting or you do not. ...Full Story

Open Source

We've seen a lot of companies opening their patents for open source, but we haven't seen a single aggregation point. Now I hope what Red Hat is doing with the Software Patent Commons becomes that single aggregation point [June 3, 2005]
Burton Group Senior Analyst Gary Hein, commenting on Red Hat's contribution of Fedora to a foundation

More open than thou: More and more major vendors are publicly taking the “openness oath” since IBM announced that it would not assert 500 of its patents against open source software implementations. This month’s crop of New True Believers includes Sun (which opened Solaris, its flagship operating system), and Nokia, which announced that the Linux Kernel would be immune from a Nokia patent attack. Meanwhile, leading Linux vendor Red Hat tried to wrap the openness flag more tightly around itself, accusing competitor Novell of being too proprietary, and also announcing that it would contribute its Fedora “hobbyist Linux” software to a non-profit foundation.

Sun begins open-source Solaris era
By: Stephen Shankland, June 14, 2005 -- The company on Tuesday posted more than 5 million lines of source code for the heart of the operating system--its kernel and networking code--at the OpenSolaris Web site. However, some source code components, such as installation and some administration tools, will arrive later. If all goes according to Sun's plan, Solaris won't just be a product of the roughly 1,000 programmers inside the Santa Clara, Calif.-based company. ...Full Story

Nokia Announces Patent Support To The Linux Kernel
Nokia Corporation, May 25, 2005 -- Nokia Corporation announced today that it allows all its patents to be used in the further development of the Linux Kernel. Nokia believes that open source software communities, like open standards, foster innovation and make an important contribution to the creation and rapid adaptation of technologies. Unlike other open standards, however, many open source software projects rely only on copyright licenses that often do not clarify patent issues. ...Full Story

Red Hat challenges Novell with open source directory
By: Neil McAllister
InfoWorld, June 6, 2005 -- With its acquisition of Suse Linux in 2003, Novell set itself up as the chief commercial competitor to Red Hat Linux for the enterprise Linux market. Last week Red Hat struck back, this time bringing the competition to Novell's home court. Novell has described itself as a "mixed source" company offering customers both open and proprietary products. Below a certain level of the application stack, Novell relies on open source technologies -- including Suse Linux and the JBoss application server -- backed with enterprise-class support. Higher up the stack the core of Novell's business remains its proprietary stack of networking and identity management software such as its eDirectory server. ...Full Story

Red Hat Frees Fedora, Calls For Commons
By: Sean Michael Kerner, June 3, 2005 -- Number one Linux distributor Red Hat wrapped up its user conference in New Orleans this week by "freeing" its Fedora community Linux project and calling for the creation of a software patents commons. Mark Webbink, deputy general counsel at Red Hat, said Red Hat would create the Fedora Foundation in order to manage the Fedora Project. The intent is to move copyright ownership of contributed code and development work to the Foundation. ...Full Story

Story Updates

Sometimes you have to get to this point for things to be resolved [June 12, 2005]

Rambus General Counsel John Danforth, on Rambus’ decision to sue Samsung, its biggest customer

The sue me/sue you blues: Rambus took a page out of the George Harrison songbook this month when it added a new defendant to the long list of memory vendors it has already taken to court: Samsung – its largest customer. Samsung promptly reciprocated by filing its own charges against Rambus. Only a short while ago, Rambus finally settled with Infineon, reversing the cash drain on its bottom line with the quarterly payments that Infineon would then be making (albeit at a rate that the market judged more of a victory for Infineon, which received additional patent rights as part of the deal, than for Rambus). Now, it's back to paying legal fees again. Or, as Harrison musically put it in the messy aftermath of the breakup of the Beatles, “When you serve me/ And I serve you/ Swing your partners, all get screwed.”

Samsung, Rambus trade lawsuits over chips June 12, 2005 -- The relationship between computer memory designer Rambus and one of its largest customers, Samsung Electronics, turned sour this week as the companies traded federal lawsuits over patents for computer memory chips. Weeks before a five-year patent licensing agreement between Rambus and Samsung was set to expire, Rambus on Monday said it terminated the license pact and filed a patent infringement suit against Samsung in California. A day later, Samsung sued Rambus in a Virginia court--the same court that in March handed Rambus a devastating blow in a separate case involving Germany's Infineon Technologies. Samsung asked the court to declare four Rambus patents "invalid and unenforceable." ...Full Story

Standards are Serious (right?)

When the government, in a Swiftian sense, declares that a pig can fly, we all enjoy a laugh. But pity the state employee who has to travel on the back of that pig! [June 1, 2005]

Steven Titch, IT&T News, reporting that the Minnesota legislature thinks that open source and open standards are the same thing

Department of “no comment:” Uh, you mean, they’re not the same thing?

Minnesota Botches IT Bill
By: Steven Titch
IT&T News, June 1, 2005 -- Caught up in the fervor over open-source software and believing open-source alternatives save money, Minnesota wants to encourage the use of royalty-free operating systems such as Linux. The law, proposed as HF 2222, would require state agencies that opt for proprietary software to submit a justification to the legislature…. First, the bill defines open-source software as carrying no royalties. That would not be troublesome by itself, but then the bill’s writers go on to confuse open source with open standards. ...Full Story
