The Free Standards Group: Squaring the Open Source/Open Standards Circle

Before there was Linux, before there was open source, there was (and still is) an operating system called Unix that was robust, stable and widely admired.  It was also available under license to anyone who wanted to use it, because it had been developed not by a computer company, but by personnel at AT&T's Bell Labs, which for a time was not fully aware of its status as the incubator of a vital OS.

Mighty was the rise of that OS, and regrettably, so was the waning of its influence.  Happily, Unix was supplanted not only by Windows NT, but also by Linux, the open source offshoot of Unix.  But today, Linux is at risk of suffering a similar fate to that suffered by Unix.  That risk is the danger of splintering into multiple distributions, each of which is sufficiently dissimilar to the others that applications must be ported to each distribution - resulting in the "capture," or locking in, of end-users on "sub brands" of Linux.

The bad news is that the rapid proliferation of Linux distributions makes this a real possibility.  The good news is that it doesn't have to happen, because a layer of standards called the Linux Standard Base (LSB) has already been created through an organization called the Free Standards Group (FSG); it allows ISVs to build to a single standard, and know that their applications will run across all compliant distributions.  And happily, all of the major distributions have agreed to comply with LSB 3.1, the most recent release.

I recently interviewed Jim Zemlin, the Executive Director of the FSG, as well as Ian Murdock, the creator of Debian GNU/Linux and the FSG's CTO and Chair of the LSB Working Group.  That interview appears in the May issue of the Consortium Standards Bulletin and covers a great deal of ground.  Some of the most interesting details, though, relate to how this open standards process interacts with, and serves, the open source process that creates Linux itself.  Below, I've excerpted those parts of the interview, so that you can see how it's done.  [Disclosure:  I am on the Board of Directors of the FSG, and am also FSG's legal counsel.]

FSG — Linux Interface

1.  Which open source projects does FSG actively engage with?

Primarily the Linux distributions but also many of the constituent projects, particularly if those projects provide a platform that developers can target that could benefit from better integration with the broader Linux platform. Good examples here include the GNOME and KDE desktop environments. Each of these desktop environments is a platform in its own right, but a desktop isn't much use unless it is well integrated with the operating system underneath. Furthermore, ISVs targeting the Linux desktop ideally want to provide a single application that integrates well regardless of which environment happens to be in use.

2.  How does FSG work with the Linux development team and the Linux process?

Actually, the LSB doesn’t specify the kernel–it only specifies the user level runtime, such as the core system libraries and compiler toolchain. Ironically, then, the _Linux_ Standard Base isn’t Linux specific at all–it would be entirely possible (and probably not altogether hard) for Solaris to be made LSB compliant. The LSB is entirely concerned with the application environment, and the kernel is usually pretty well hidden at the application level.
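
[To make the distinction concrete, here is a rough sketch of my own, not part of the interview. The program below touches only the standardized user-level runtime (the C library), never the kernel directly, which is why an LSB-style application neither knows nor cares which kernel, or which distribution, sits underneath. The build command in the comment assumes the LSB SDK's lsbcc compiler wrapper is installed; a plain gcc build works too, just without any check that only standardized interfaces were used.]

    /* hello_lsb.c - a minimal sketch of an "LSB-level" application.
     *
     * Everything here goes through the user-level runtime (glibc's stdio
     * and stdlib interfaces); nothing talks to the kernel directly, so the
     * program neither knows nor cares which kernel or distribution is
     * underneath.
     *
     * Hypothetical build with the LSB SDK's compiler wrapper, which links
     * only against LSB-specified libraries:
     *     lsbcc -o hello_lsb hello_lsb.c
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *home = getenv("HOME");   /* a standardized libc call */
        printf("Hello from an LSB-style application.\n");
        printf("HOME is %s\n", home ? home : "(unset)");
        return EXIT_SUCCESS;
    }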

3.  Does the Linux community participate in FSG as well?

Yes, though most participation comes from engineers that work for the various companies that have an interest in Linux (Intel, IBM, Novell, HP, Ubuntu, etc.). However, there’s nothing particularly unusual about that. Most open source development these days is done by commercial interests, not by college students working out of their dorm rooms, which seems to be the common perception. (Of course, a lot of it starts there, but the best developers eventually figure out how to get paid to do it.) Whether you’re interacting with paid engineers or unpaid volunteers, though, a key to success in the open source community is getting the right people to buy in to what you’re doing and, ideally, getting them to participate. In general, the FSG mission resonates well with the open source community, and we have little difficulty getting that buy in and participation.

FSG — Linux Dynamics

1.  I’ve heard you describe the relationship of the open source and open standards processes in “upstream” and “downstream” terms.  Given that open source development is “real time” and ongoing-release, while standards have traditionally operated on a fixed basis, with nothing changing for a period of time, how do you make this work?

One way to understand this is to look at the attributes of a successful open source project. Success is relative to the number of developers and users a particular body of code attracts; Apache is a good example. As the community iterates code with similar functionality, for example a web server or a C compiler, the participants end up aligning themselves around one, or in some cases two, projects. Smaller projects tend to die. The ones that succeed then join the many other packages that are integrated into a platform such as Linux.

The trick in standardizing, then, is to decide which snapshot in time, and which interfaces from those packages at that point, will guarantee interoperability. By coordinating with these disparate upstream projects on which versions of their code are likely to be broadly adopted downstream by the distro vendors, we provide a framework for those working both upstream and downstream. In the case of the Linux distros, we help them cooperate in order to give the term “Linux” meaning in terms of the kind of interoperability that is commonly expected of an operating system platform such as Windows or Mac OS.

This effort requires ongoing awareness of the spec development process both upstream and downstream, and a rapid feedback framework for all parties. It also requires a coordinated parceling out of the testing effort to the appropriate sub-projects. In other words, we are applying the bazaar method of open source coding to the development of standards. That is how the community works, and we are a part of that community.

2.  At the process level, what other aspects of open source development are most problematic for standard setting, and vice versa?

Before answering that question, there’s one very important thing to understand about the FSG, and that’s that we don’t define standards in the same way that a traditional standards body defines standards. And that’s just the nature of the beast: The open source community is vast, complex, amorphous, and continually in motion. It’s also an integral part of what we do. So, the FSG by nature isn’t just a well-defined consortium of technology vendors that can define things unilaterally. It’s a well-defined consortium of vendors, certainly, but it’s also more than that, in that the vast, complex, amorphous, continually moving open source community needs to be represented at the table. In a lot of ways, what we’re doing at the FSG, namely bringing together open standards and open source, is unprecedented.

Clearly, our interactions with the open source community affect the processes we use to build the LSB and our other standards. We can’t just say “this is the way things are” the way we’d be able to do if our constituency was smaller and more self-contained. Instead, the way we define standards is far more about consensus building and observation–we watch what’s happening in the open source community and industry and track what’s emerging as a “best practice” through natural market forces and competition.

One of the challenges of the LSB project, then, is understanding what technologies have become or are becoming best practice, so that we can begin the process of incorporating those technologies. Another challenge is dealing with a moving target–after all, although the process of defining the standard is different, at the end of the day, the standard has to be every bit as precise as, say, a plumbing specification, or it won’t guarantee interoperability. Fortunately, we already have a model to follow here, namely the Linux distributions, which perform the analogous task at the technology level by assembling the various open source components into a cohesive whole.

So, our task essentially boils down to tracking the technologies that ship in the majority of Linux distributions, and to building a layer of abstraction, a metaplatform of sorts, above the multiplicity of distributions so that application developers can target a single, generic notion of Linux rather than each distribution individually.
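
[A rough illustration of my own, not from the interview: without a common base to target, installers and applications end up probing for each distribution individually, adding a new branch for every distribution that appears. The release files checked below are common conventions on Red Hat and Debian style systems, not part of the LSB; the point of the metaplatform is that a compliant application never needs branches like these, because it relies only on the interfaces and locations the standard guarantees.]

    /* distro_probe.c - the per-distribution branching that the LSB aims to
     * make unnecessary.  Each branch encodes knowledge about one distro;
     * an application targeting the LSB would need none of them. */
    #include <stdio.h>

    static int file_exists(const char *path)
    {
        FILE *f = fopen(path, "r");
        if (f) { fclose(f); return 1; }
        return 0;
    }

    int main(void)
    {
        if (file_exists("/etc/redhat-release"))
            printf("Red Hat style system detected; using RPM-specific logic...\n");
        else if (file_exists("/etc/debian_version"))
            printf("Debian style system detected; using dpkg-specific logic...\n");
        else
            printf("Unknown distribution; giving up.\n");
        return 0;
    }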

We also work to increase participation in the ongoing development of the standard and to facilitate collaboration among the key stakeholders to more rapidly reach consensus around the best practices. The goal here is to capture in the LSB roadmap not just what exists in the current generation of the major distributions, but what’s coming in the next as well. After all, ISVs developing Linux applications today will often see the next generation as a primary target.

3.  What compromises (technically and process-wise) have the Linux and FSG communities had to make in order for the LSB to be practical while not impeding the work of either side?

The biggest challenge in what we do is probably no different than in any other standardization effort: balancing the need for standards with the need for vendors to differentiate from each other. However, in the open source world, this tension is probably more pronounced due to the speed at which development occurs. I’d say the biggest compromise the open source community makes is understanding the importance of standards, backward compatibility, and all the sorts of things that tend not to be “fun” but which are vital to commercial acceptance–and being committed to doing what needs to be done. On the FSG side, the biggest compromise is being fairly hands-off and leaving it to the marketplace to determine which of many alternatives is the best practice. The real key is making sure interoperability problems don’t crop up in the process, and that depends on keeping all the parties in constant dialogue so that the right balance is struck. We see that as one of the roles of the FSG–providing a neutral forum for these kinds of conversations between the key stakeholders.

Looking to the Future

1.  Where else are organizations modeled on the FSG needed?

I wouldn’t frame it as where else is an FSG needed but rather where should the FSG go from here? At the end of the day, the LSB is a development platform standard. Some developers target the operating system in C or C++; others target middleware platforms like Java or LAMP; others are moving further up the stack to the web, where applications span site and even organizational boundaries (think of the various “mashups” that are happening around the so-called “Web 2.0” applications like Google Maps). Today, we cover the C/C++ level pretty well, but we need to move up the stack to cover the other development environments as well. The ultimate goal is to provide an open standard developers can target at any layer of the stack that’s independent of any single vendor.

So, the short answer is that we aspire to provide a complete open standards based platform (“metaplatform” is actually a more accurate way to describe it), and Linux is obviously just one part of such a platform. We need to move up the stack along with the developers to incorporate the higher level platforms like Java and LAMP. We need to extend the coverage of the operating system platform too, as we’ve done in LSB 3.1 with the addition of desktop functionality and are doing around printing, multimedia, accessibility, internationalization, and other areas in LSB 3.2. Even at the operating system level, there’s nothing inherently Linux specific about the LSB, so there’s nothing preventing us from encompassing other open platform operating systems, such as the BSDs or Solaris. In the end, it’s about all open platforms vs. closed platforms, where the closed platform du jour is Windows.

So, the real question is, how can the open metaplatform better compete against Windows? For one, Windows has .NET. Linux (and the other open platform operating systems) has Java, but it’s not as well integrated, and it’s not as well integrated because of the Java licensing. Sun has indicated they’re going to open source Java as soon as they address the compatibility concerns. We have a lot of experience in that area, so perhaps we can help. In the end, it all comes down to a strong brand and tying compatibility testing to the use of that brand, which is the approach we take with the LSB. There’s no reason a similar approach couldn’t work for Java, and the benefit of an integrated Java with the open metaplatform would be enormous.

Obviously, doing all of that is an enormous amount of work, undoubtedly an impossible task for any single organization to accomplish on its own. Then again, so is building a complete operating system, and a lot of little companies (the Linux distribution vendors) managed to do it by taking preexisting pieces and fitting them together into integrated products. And, as it turned out, the whole was a lot more valuable than the sum of its parts.

We take the same approach on a few levels. First of all, the LSB is an open process, so the best way to get something into the standard (assuming it’s a best practice, i.e., shipping in the major Linux distributions) is to step up and do the work (i.e., write the conformance tests, etc.). In other words, we leverage the community the same way an open source software project would. Second, there are a lot of open standards efforts tackling pieces of the overall problem, and we seek to incorporate their work. In that sense, we’re essentially an integrator of standards, a hub of sorts, much as the Linux distributors are essentially integrators of technology. We don’t have to solve the total problem ourselves, just provide an open framework in which the relevant pieces can be fitted together.
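
[For a sense of what “doing the work” looks like, a conformance test is typically a small program that exercises one standardized interface and checks the behavior the specification promises. The sketch below is my own illustration, not something taken from the LSB test suite: it verifies that snprintf truncates, NUL-terminates, and returns the would-be length, as the C standard requires.]

    /* snprintf_conformance.c - the shape of a conformance-style test:
     * call one standardized interface and verify the behavior the
     * specification guarantees. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char buf[8];
        int n = snprintf(buf, sizeof buf, "%s", "interoperability");

        if (n != (int)strlen("interoperability")) {
            fprintf(stderr, "FAIL: unexpected return value %d\n", n);
            return EXIT_FAILURE;
        }
        if (buf[sizeof buf - 1] != '\0' || strcmp(buf, "interop") != 0) {
            fprintf(stderr, "FAIL: not truncated and terminated as specified\n");
            return EXIT_FAILURE;
        }
        printf("PASS\n");
        return EXIT_SUCCESS;
    }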

2.  In the long term, should the standardization process and the open source process merge?  In other words, is there a benefit to there being an independent FSG, or in the future would it be better if the open source community incorporated this role into its own work?

Currently, there is no better way to balance the needs of a competitive distribution community with application interoperability. An independent standards provider bridges the gap between the open source community and the distributions implementing its software by allowing the best practices of the latter to be standardized, thus making it easier for ISVs and end users to actually use the platform. The open source community does not want to concern itself with this kind of standardization work, nor should it. An independent consortium can drive consensus while being politically sensitive to the needs of its constituents.

3.  What is the single thing that open source advocates most need to “get” about standards, and need to work harder to accommodate?  Same question in reverse?

It would be helpful if those in some of the upstream projects participated more closely in our standards efforts. They are already doing this, but there is always room for more participation. Closely tracking those projects into the standard (or even just through a database) will provide a great deal of service to ISVs and the distribution vendors. We plan on offering this service.

In the other direction, standards bodies need to recognize that open source development is fundamentally different than traditional software development. When working with the open source community, participation and buy-in are critical—you can’t just declare something to be so and expect the open source community to just follow suit—as is the ability to move quickly. For the FSG’s part, we understand all of this very well—after all, we grew out of the open source community—but it’s an observation other standards efforts would do well to keep in mind as open source and open standards increasingly intersect.

The entire interview can be read here.


Comments (7)

  1. “Linux is at risk of suffering a similar fate to that suffered by Unix.  That risk is the danger of splintering into multiple distributions, each of which is sufficiently dissimilar to the others that applications must be ported to each distribution – resulting in the “capture,” or locking in, of end-users on “sub brands” of Linux.”

    There are lots of assumptions in there. Firstly, I do not think that Unix suffered from there being lots of unices; it suffered because there were not enough versions. At the pivotal moment of the personal computer revolution, there was not one decent version of Unix easily available to users on their low-powered home/office computers. So as mainframes gave way to white boxes, Unix-like systems had to build the market again from scratch in a sea of DOS.

    There are at least two differences between the bad Unix days and today. Firstly, I can never be captured or locked in because I have all the source code on my hard drive. I can make it run on whatever new platform comes along: mobile phones, Xboxes or whatever. Secondly, the Internet has made possible near-instant distribution of software from creator to user.

    I like the fact there are different distributions of GNU/Linux: different horses for different courses. A server is not the same as a mobile phone or a desktop. You could turn all distributions into Red Hat or into Debian, but there is no real advantage and lots of disadvantages. How could Gentoo or Damn Small Linux fit into the Standard Base?

    The problems/opportunities for Linux adoption are not in the core OS but in the applications that run on it – you do not install Linux to have Linux, you install it to run applications.

    The key to not being locked in is not the OS but the ability to pick up your data and walk. So I am not interested in the compiler toolchain or some other low-level library, but in the document formats. If free/open source application A uses X format and application B uses Y format, then it can end up as bad as proprietary software.

    We are now getting to the point where there are a set of common file formats that will work across all posix platforms and even the proprietary operating system that comes installed on cheap new computers.

    We already have png for images and ogg for media files, now we also have the benefit of OpenDocument, so you can take a file from an Abiword user on Gnome, give it to a Koffice user on KDE to correct, who then hands it on to an OpenOffice user on that proprietary operating system, and all the way through you will not lose formatting, version information and so on.

    “…allows ISVs to build to a single standard, and know that their applications will run across all compliant distributions”

    Well, just make it compile on GCC on one version of GNU/Linux and then it will work on the rest if you give out the source code. That is, after all, what distributions are for: taking the source code from all the projects and providing it in a useful form for the end-users.

    What we are talking about is medium to large proprietary software companies wanting to make one closed-source binary that will work on every Linux/BSD/Unix system.

    If you want to make such a blob then make it a Red Hat RPM, as the rest of the GNU/Linux userbase will probably ignore your application by virtue of it not being open-source. When in Rome, do as the Romans do. An open-source operating system needs open-source applications; that's just how it is.

    GNU/Linux is not Windows, so trying to make it into Windows means that you have a poor copy of Windows. This applies both to the software side and the business side. If you desire a monopoly, i.e. a single binary and a single distribution, then use Windows.

    The Linux model is distributions competing to provide the best service, the Windows model is stagnant monopoly. Do not cross the streams.

    “the potential for Linux to fragment”   “splintering”

    Well it is fragmented, that is the whole point, and that is a good thing. I would instead use the term ‘modular’. Do not think wood, think lego. 

    We have a few kernels – Linux, BSD, Mach and so on, a toolkit – GCC, binutils, Glibc etc, and a huge number of libraries and applications. We also have distributions that put them together in different combinations – whether that be Debian giving out isos or Nokia compiling an OS for a mobile phone.

    “Linux” does not need to be a single operating system like Windows XP with clear boundaries of what is in and out. Instead the ‘stack’ will slowly take over everything, like a creeping plant or Giger’s Xenomorph. Mac OS X often uses GCC, Solaris is moving towards GCC. Firefox runs on them all.

    So, in conclusion, we need common open data formats, and let the distributions innovate and differentiate however they want to create and deliver that data.

    Thanks for reading. Have a nice day.

    • > Well, just make it compile on GCC on one version of GNU/Linux and then it will work on the rest if you give out the source code.

      Wow – now who is making assumptions? Sorry to pick holes, but that statement is plain bullshit; your failure to understand why negates the rest of your argument.

      Jon

      • Well yes, I am simplifying; this is not really the place for a complete overview of toolkit methodologies, but source code that compiles properly on, say, Red Hat can be made to work by the other distros and then given out to users.

        If you could give an example rather than just swearing at me and calling me names, that would be good.

      • > Well yes, I am simplifying; this is not really the place for a complete overview of toolkit methodologies, but source code that compiles properly on, say, Red Hat can be made to work by the other distros and then given out to users.

        Well, that is true – for now.

        It's only true, however, if the distros share a common base of libs, a backwards compatible compiler, a common API for the kernel – in some cases a common interface to coreutils, /proc, networking, etc. I wonder if this would be true without the LSB? Probably at the moment, but for how long?

        The fact that application A can be built on distro B is only possible because they tend to feed from the same sources; the moment the distros don't agree on libs, or a core package like X forks, it will all break down.

        It's not common for application A to build on distros B, C, D, E and F without changes – when that is possible, Linux will be a true standard, a real standard, not the knicker elastic and gum of automake. Take a package from Red Hat and build it on Debian: OK, it can be made to build, but now it doesn't run without a complete reconfigure because the filesystem layout differs.

        Linux is a mess, Unix was a complete mess; that's progress – but only a little.

        Example 1 – I moved a complex web site between Red Hat and Debian: over 400 changes required! Differences in paths, security model, available packages.

        Example 2 – I write embedded applications. Any tiny change can stop things working: minor differences in /proc have killed things just from changing kernels, and gcc is bug-ridden and not always backwards compatible – change the compiler by a .1 version and things start to break.

        Example 3 – Write a control panel application for network configuration. Oh look – everything is now distro-specific, no common glue between distros: any script as long as it's mine.

        Example 4 – Write a win32 GUI application, no wait – that one IS portable …. ho hum …..

        Jon

      • > Example 4 – Write a win32 GUI application, no wait – that one IS portable …. ho hum …..

        No it isn’t – Can’t run it on a Power PC.. or an Alpha.. or a Sparc.. or an Itanium..

    • Linux is not fragmented and will not fragment.

      Unix fragmented because it was closed; only the open Unixes are with us today: BSD and Linux.

      All the closed Unixes are dead or stagnant. Solaris is trying to become open to survive – only time will tell if they have done this in time.

      The open nature of Linux and the GNU toolchain allows the different distros to package the Linux kernel and several different tool-chains and tool stacks. Good ideas in one distro easily propagate to other distros, so while there are differences and incompatibilities, they are few and far between. Only closed source vendors unwilling to distribute source struggle with incompatibilities – usually because they write very poor code.

  2. “So, the real question is, how can the open metaplatform better compete against Windows?”

    Well, of course, distributors could add (for example) graphics drivers that work “out of the box”.

    Linux haters claim all sorts of reasons for not doing this, but are they right?

    The following quote is taken from: http://linux.coconia.net/politics/kmodsGPL.htm

    The above mentioned possibility of hiding the entire code of a program as an application library, is the reason that the GPL demands that any application that links to GPL’d shared libraries, must itself be GPL’d (a program is GPL’d, if it is licensed under the GPL).

    It has been claimed that distributing a GPL’d kernel with binary closed source kernel modules is illegal. This claim has been advanced, to stop Linux distributors from shipping with Nvidia and ATI drivers that work “straight out of the box”. A recent example of this is the Kororaa controversy.

    Those wishing to cripple Linux, make many unsubstantiated claims, some of which are wrong, in order to prevent Linux distributors shipping Nvidia and ATI drivers that work “out of the box”. Here is a sample:

    1) GPL and non-GPL components cannot be included together on a CD.
    2) Closed source kernel modules distributed with a GPL’d kernel clearly violates the GPL.
    3) Don’t include closed source kernel modules as the situation is murky. You might get sued.
    4) Closed source kernel modules link to the kernel in the same way that applications link to libraries, therefore you cannot include them with a GPL’d kernel.

    One, is wrong. Two, is not clear at all. Three, which sounds correct, is also wrong. Think about it, who is going to sue you? The Free Software Foundation? Not likely. Perhaps Microsoft might be interested in enforcing the GPL. Four, seems to have some merit, but is wrong……….

    For the full article, see http://linux.coconia.net/
