The following article was co-written with Michele Herman of JustTech
Open source software and open standards have many similarities, but the legal frameworks under which each is created have real and important differences. Nonetheless, there is an increasing desire to …
In its simplest form, FOSS development requires almost no traditional economic, physical or management support. All that is needed is a place to host code in a manner that allows multiple developers to collaborate on its further development. As FOSS has become more commercially valuable and more widely incorporated into vendor and customer strategic plans, however, additional layers of services and structures have evolved to make FOSS development more efficient and robust, and the user experience even more productive. These include training, a growing certification testing network, a variety of tools to assist in legal compliance matters, and a network of hosting entities providing a wide range of supporting services and frameworks.
It would not be an exaggeration to say that the magic of open source software (OSS) is based as much on legal innovation as it is on collaboration. Indeed, the essential innovation that launched free and open source software was …
Everybody uses open source software (OSS) today. Millions of people contribute to the code itself. Indeed, a substantial percentage of the users and creators of OSS today are young enough to have never known a world that didn't rely on OSS. In other words, it's very easy to take this remarkable product of open collaboration for granted.
The vast majority of free and open source software (FOSS) projects today operate on a license in/license out basis. In other words, each contributor to a code base continues to own her code while committing to provide a license to anyone who wants to download that code. Of course, no developer ever actually signs a downstream license. Instead, all contributors to a given project agree on the Open Source Initiative (OSI)-approved license they want to use, and those terms stand as an open promise to all downstream users.
But is that really the best way to operate? What about the minority of projects that require contributors to assign ownership of their code to the project? They clearly think assignment is a better way to go. Are they right?
Free and open source software (FOSS) development has for many years enjoyed an increasingly positive public image. Particularly in the last several years, it’s become recognized as the foundation upon which most of the modern computing world rests. FOSS proponents now include many governments as well, notably in Europe, and the European Commission itself.
That’s all good and quite appropriate, but it’s worth keeping in mind that FOSS involves the conscious agreement of head-to-head competitors to work toward a common result – something that would otherwise normally be a red flag to antitrust regulators in the US, to competition authorities in Europe, and to many of their peers throughout the world. To date, those regulators do not seem to have expressed any concerns over FOSS development generally. But that could change.
Everything changes over time, from the constitutions of nations to political theories. Should the Open Source Definition be any different?
Earlier this week the Board of Directors of the Open Source Initiative issued an Affirmation of the Open Source Definition, inviting others to endorse the same position. The stated purpose of the release was to underline the importance of maintaining the open source software (OSS) definition in response to what the directors see as efforts to “undermine the integrity of open source.” Certainly, that definition has stood the test of time, and OSI has ably served as the faithful custodian of the definition of what can and cannot be referred to as OSS.
That said, while well-intentioned, the statement goes too far. It also suggests that the directors would do well to reflect on what their true role as custodians of the OSS definition should be.
By Ashley Lipman
Many people have heard of Kubernetes, but don’t know when or where to use it, or even what its functionality is. Docker users may be more familiar with the platform, but still unsure how to make the transition to using Kubernetes.
In this article, we’ll take a beginner’s approach to what Kubernetes is and how to start using it. This information will give you a high-level overview of the platform and highlight some key considerations.
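As a first taste of what “using it” can look like, here is a minimal sketch of one way to peek inside a running cluster from code, using the official Kubernetes Python client. It assumes you already have a local cluster (minikube or Docker Desktop will do), a working kubeconfig, and the client installed with pip install kubernetes.

```python
# Minimal sketch: list every pod in a cluster with the official Python client.
# Assumes a reachable cluster and credentials in ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()   # read cluster address and credentials from the kubeconfig
core = client.CoreV1Api()   # API client for core objects: pods, services, nodes, ...

for pod in core.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}\t{pod.metadata.name}\t{pod.status.phase}")
```

In day-to-day use, most people drive Kubernetes through the kubectl command line and YAML manifests rather than through code, but the same API objects (pods, deployments, services) sit behind both, which is why a small scripted example like this is a handy way to see what a cluster is actually doing.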
The wire services lit up yesterday with news that six of the largest tech companies in the world had issued a statement in support of interoperability in healthcare at a developer conference. It’s a righteous goal, to be sure. In an interoperable healthcare world, your entire, lifelong health record could be accessed anywhere, anytime, by anyone giving you care, from your primary physician to an emergency responder. It is so virtuous a goal, in fact, that everyone, including the US government, has been trying to achieve it – without success – for over a decade. Will yesterday’s news bring us any closer to that goal?
First, the good news: last week, Google, Microsoft, Twitter and Facebook announced the Data Transfer Project (DTP), inviting other data custodians to join as well. DTP is an initiative that will create the open source software necessary to allow your personal information, pictures, email, and other data to be transferred directly from one vendor’s platform to another, and in encrypted form at that. This would be a dramatic improvement over the current situation where, at best, a user can download data from one platform and then try to figure out how to upload it to another, assuming that’s possible at all.
So what’s the bad news, and what does a hammer have to do with it?