South Africa Appeals OOXML Adoption
Authored by: Anonymous on Tuesday, June 10 2008 @ 12:07 AM CDT
Rick,

"Validation is objective proof."

Are you referring to the silly discussion between Alex and Rob Weir?
http://www.robweir.com/blog/2008/05/odf-validation-for-dummies.html
http://www.robweir.com/blog/2008/05/achieving-impossible.html

It might be difficult to get original validations (or documents) from 2005, when ODF became an OASIS standard (ISO approval followed in 2006), I think. But there are plenty now. So maybe someone can go and harvest old files or application versions from those days. But in those innocent days, no one thought their words would be contested one character at a time, so they probably didn't notarise their document files.
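
For readers wondering what "validation" concretely means in those posts: an ODF file is just a ZIP package whose content.xml can be checked against the published OASIS RELAX NG schema. A minimal sketch in Python (the file names and schema path below are placeholders for illustration, not anything from Rob's posts):

    # Sketch: check the content.xml inside an ODF package against the
    # OASIS RELAX NG schema. Paths and file names are illustrative.
    import zipfile
    from lxml import etree

    def validate_odf(odf_path, schema_path):
        # Load the RELAX NG schema, e.g. the OASIS ODF 1.0 schema file.
        relaxng = etree.RelaxNG(etree.parse(schema_path))
        with zipfile.ZipFile(odf_path) as pkg:
            # content.xml carries the document body inside the package.
            doc = etree.parse(pkg.open("content.xml"))
        return relaxng.validate(doc)  # True if the document conforms

    print(validate_odf("old_document.odt", "OpenDocument-schema-v1.0.rng"))

Run against a 2005-era file and a 2005-era schema, something like this is what it would take to reproduce those old validation results today.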

Given the duration and extent of the cooperation between OO.o, IBM, and several FOSS projects (e.g., KOffice), there were partial implementations available around that time, and I understand that OO.o tried to make sure reference implementations were available during standards development.

But your use of "Vendor" to refer to OO.o, KOffice, and Abiword might be misleading. Claims from FOSS projects generally deserve to be taken more seriously than those of commercial vendors. So dismissing claims by OO.o or KDE as if they were claims from Caldera/SCO or Enron is largely unwarranted.

Winter
South Africa Appeals OOXML Adoption
Authored by: Andy Updegrove on Tuesday, June 10 2008 @ 05:57 AM CDT
Rick,

I don't really accept your point. As you know, the majority of software standards _never_ have a formal certification test, largely because of cost. Test software costs a lot to develop, and those costs are rarely recoverable to the extent that offering the tests would turn a commercial profit. Hence, they usually never get created. I've represented over 95 standards consortia and SDOs (announced or currently in creation), which together have developed many hundreds, and perhaps thousands, of standards. My top-of-the-head guess is that fewer than 20% of those standards were ever supported by a software test suite, and of that 20%, perhaps 20% benefited from third-party testing, as compared to self-testing. Yet many of those standards are now cornerstones of their respective commercial niches. This is the simple reality of the marketplace, as you know.

Market demands (e.g., is it a consumer or a business product?) and realities (small market or large?) dictate where tests exist. If you are selling a WiFi home router or building an ATM, you can bet that the testing will be rigorous, and there will also be investment in a brand campaign, because there is no tolerance for lack of robust interoperability without great commercial damage. If you're talking about a limited-market B2B product, then it's likely there won't be a test, and the customer will expect to have some jiggering to do, or that vendors behind the scenes will have already done that work one on one.

And, as you know, different standards deliver different degrees of interoperability, for many reasons, including level of complexity. A really good standard, or a really simple one, may get you really close - or in the case of a physical standard, all the way. An average software standard, or a really complex one, may not. That's why - as you know - "plugfests" are very popular, regularly scheduled, ongoing parts of the standards scene: they let vendors get together in a confidential setting and work the kinks out of their "compliant implementations" among themselves, to cover the last yard that the standard or other factors couldn't deliver.

Rick, you have a habit of delivering statements - like the pro forma one - that you throw out and then, when confronted, abandon without comment. You then go on to toss out another similar statement, delivered without market context. This doesn't really advance understanding at all, and leaves me feeling that you drop in here to stir up the audience more than to educate people.

Complex standards are a tough business, as you know, and they don't usually deliver clean results. That's why you create the type of tests you're talking about: so that you can tell which implementations are of higher quality than others. And that's why vendors create them themselves, as development tools, to test and improve products they have already made as compliant as possible. Finally, you are confusing people by conflating the thoroughness of a standard with the "compliance" of an implementation. There are plenty of useful standards, and totally compliant implementations, that don't deliver plug-and-play interoperability, for many reasons, as noted above.

What matters is how widely a standard gets implemented, and how hard vendors try to reach a high level of interoperability - and whether they succeed. The important thing about ODF is that as soon as the standard came out, a meaningful and varied pool of enthusiastic vendors and open source projects, with a wide variety of models - software, Web-based and so on - jumped on board to adopt it, and work expanded within OASIS to extend the coverage of the standard still further, to provide for greater accessibility, and now to provide testing tools. These are attributes of a healthy and growing ecosystem that will deliver choice, price competition and innovation. Sniping about what "compliance" means, and holding ODF to an unnaturally high standard for a specification at this stage of development, serves no useful purpose.

  -  Andy