Month: May 2007

When business got things right about Free/Open Source licenses

It’s always interesting to see “business” people getting things right about the Free/Open Source world. For example, last month’s Boulder Open Coffee Club was dedicated to “open source issues that developers face”. The NVA blog contains a summary of the recommendations, which basically boil down to: “know the licenses you are using and what you can(’t) do with them”. And AskTheVC links to Lawrence Rosen’s book “Open Source Licensing” (I haven’t read it yet; maybe a topic for a future post). They also link to a Boulder company (Openlogic) that helps you maximize returns, minimize risks and accelerate innovation with Open Source (all keywords you should have in a business plan! ;-)). They also offer some resources about Open Source for businesses.

We were already sure about it, but now other people are actually creating things with Free/Open Source software 🙂

Post-publishing editing

Brad Burnham recently wrote a post on the editorial process on the web, where the work happens after the publish button is pushed, not before. It’s a report on a forum session, and you can read some stakeholders’ opinions in the post. There is a series of good points in the post and in the comments but, imho, some questions are also left unanswered.

Basically, blog posts are edited after their publication: if I write something wrong, people will tend to post comments correcting it. That’s why Robert Scoble doesn’t agree with Andrew Keen when the latter argued that “the recent rise of user generated content is lowering the overall quality of programming on the web”. I think there is a “population effect”: the more visitors you have, the more editing and discussion you get.

Now I would like to know if and how people edit their original posts after getting new input and/or corrections. I try to add a note at the bottom clearly stating what was edited. What if I don’t do that? The content will vary over time: how can you trust such content? Since RSS feeds do not reflect those changes, people using RSS are not even aware of them. As stated in the post I referred to, people need new tools to continuously assess the relevance of the information they read on the internet (this was already true with static content, but the need becomes more urgent as dynamic content is more easily created and modified). Alternatively, bloggers could endorse “rules of conduct” and not edit their posts, but in that case it’s up to the visitor to read the comments and work out the truth from all of this.
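To make the “new tools” idea more concrete, here is a minimal Python sketch (not a real tool; the URL and the cache file name are placeholders): it stores a hash of a page’s content and reports when the page has silently changed since the last visit. Hashing the raw HTML is admittedly crude, since a changed sidebar would also trigger it, but it shows the principle.

```python
# Minimal sketch: detect silent edits of a page by comparing a content hash
# recorded at the previous visit with the current one. The URL and the cache
# file name below are placeholders, not real resources.
import hashlib
import json
import urllib.request

CACHE_FILE = "seen_posts.json"

def content_hash(url: str) -> str:
    """Download the page and return a SHA-256 digest of its raw body."""
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()

def check_for_silent_edit(url: str) -> bool:
    """Return True if the page changed since the last recorded visit."""
    try:
        with open(CACHE_FILE) as f:
            seen = json.load(f)
    except FileNotFoundError:
        seen = {}
    new_digest = content_hash(url)
    changed = url in seen and seen[url] != new_digest
    seen[url] = new_digest
    with open(CACHE_FILE, "w") as f:
        json.dump(seen, f, indent=2)
    return changed

if __name__ == "__main__":
    if check_for_silent_edit("https://example.org/some-post"):
        print("This post was modified since the last visit.")
```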

I think we are all used to the traditional media editing process: once it’s written, it should either be right or be corrected in the next publication (by the way, the process is mutatis mutandis the same in mainstream media as well as in the scientific literature). Ditto for television: once it’s aired, it should either be correct or be corrected later. The main difference is that blog posts, and dynamic content in general, appear at an instant i and then stay around for a certain duration (delta-i) either in an unmodified or in a modified form. If you read the same popular Wikipedia page (or any other wiki, for that matter) every day, you’ll never read the same information twice (although the main facts will remain the same).

Finally, unlike Brad Burnham and the people who commented on his post, I don’t consider this post-publication editing a problem for decision making (whether at the business, personal or scientific level). Fact-checking and multiple sources of information are not obsolete yet.

Is VoIP reliable?

I am wondering whether VoIP is reliable… For a few weeks now, the university has been deploying VoIP phones across the whole campus. The good thing is that everything was apparently planned a long time ago: the cables were already there, right next to the regular network cables. But since then, problems have been occurring: no connection to the “old” phone network, a whole morning without phones due to “a problem in the software controlling one of the infrastructure devices”, and so on. A few days ago, I even received an e-mail from the lab’s computer specialist telling all the scientists how to reset the lab firewall in case it blocks all IP and voice communications (for an unknown reason). I am not criticizing the deployment model in this particular case, but I am wondering how reliable VoIP really is…

With “old” phones, you can still call the emergency services even in case of a total electricity failure (the switches at the phone company may have to run on a backup power system). Is that still the case with VoIP? After a small search on the web, the critical points for VoIP seem to be redundancy of servers, IP routes and routing devices, cooling and… keeping some traditional phones connected to the Public Switched Telephone Network 🙂
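As a side note, a simple way to at least notice outages early is to probe the gateway regularly. Here is a minimal Python sketch under two assumptions of mine: the hostname is made up, and the gateway accepts TCP connections on the standard SIP port (5060), which not every installation does.

```python
# Minimal sketch: warn when the VoIP gateway stops answering on its SIP port.
# The hostname is a placeholder and TCP on port 5060 is an assumption; many
# SIP deployments use UDP only, in which case this check would not apply.
import socket
import time

SIP_HOST = "voip-gateway.example.org"  # placeholder address
SIP_PORT = 5060                        # standard SIP port
CHECK_INTERVAL = 60                    # seconds between probes

def sip_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the SIP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    while True:
        if not sip_port_reachable(SIP_HOST, SIP_PORT):
            print("Warning: VoIP gateway unreachable; time to find a PSTN phone.")
        time.sleep(CHECK_INTERVAL)
```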

Button clutter

Image seen on a post on the Hyper Dog Blog:

Fortunately, the content was still longer than the pile of buttons on the right and the row of buttons at the bottom. Can’t someone create a “social network of social networks” (and call it “Web 3.0”, of course) to help those poor recognition-hungry bloggers? 😉

A third scientific paper for the Poirrier-Falisse!

Nandini published her second scientific paper in the Journal of Proteome Research; it has just appeared “ahead of print” (i.e. in electronic form before the “official” paper version). It’s:

Ruelle V., Falisse-Poirrier N., Elmoualij B., Zorzi D., Pierard O., Heinen E., Pauw ED. and Zorzi W.: “An Immuno-PF2D-MS/MS Proteomic Approach for Bacterial Antigenic Characterization: To Bacillus and Beyond” J Proteome Res., e-pub ahead of print.
PubMed ID: 17488104
DOI: 10.1021/pr060661g

Congratulations! 🙂

Unfortunately, it is not Open Access (that mainly depends on the lab’s publication policy), so you need to pay to access the full text (but I think Nandini will also self-archive this article somewhere). Here is Nandini’s BibTeX entry (it will be updated with the volume and pages as soon as possible).
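For convenience, here is a possible BibTeX version assembled from the citation above. The entry key is a placeholder, the volume and pages are still missing (as noted), and I am assuming that “Pauw ED.” stands for E. De Pauw.

```bibtex
@article{ruelle2007immuno,
  author  = {Ruelle, V. and Falisse-Poirrier, N. and Elmoualij, B. and Zorzi, D.
             and Pierard, O. and Heinen, E. and De Pauw, E. and Zorzi, W.},
  title   = {An Immuno-PF2D-MS/MS Proteomic Approach for Bacterial Antigenic
             Characterization: To Bacillus and Beyond},
  journal = {Journal of Proteome Research},
  year    = {2007},
  note    = {E-pub ahead of print},
  doi     = {10.1021/pr060661g},
  pmid    = {17488104}
}
```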

A small post from Luxembourg

A small post from the Benelux Sleep Congress 2007 (where they left two unprotected wifi networks in and near the congress hall 🙂). It’s mainly a medical congress, but I had very interesting discussions with, among others, Prof. Peter Meerlo and Dr. Michel Cramer Bornemann, mainly about Gemvid and the proteomics aspect of my Ph.D. Let’s see what I can do with all these contacts…

Otherwise, the Domaine Thermal of Mondorf-les-Bains is a beautiful place (ok, it’s not as natural as the landscapes around the highway and national roads leading to Luxembourg). Unfortunately, I left my camera at home (a really stupid decision taken in this morning’s rush).

Another reason why free software matters

This morning, I read Tim Anderson’s “Why Microsoft abandoned Visual Basic 6.0 in favour of Visual Basic .NET”. While reading his article, I had only one idea in mind: this is another example of the importance of free and open source software. If you are not a programmer, you don’t need to read the rest of this post; software users have many other reasons to prefer free software over closed-source software (but that’s not the subject of this post).

Basically, Visual Basic was abandoned because it was based on an old library, it lacked object orientation and it limited programmers’ creativity compared to other languages available at the time (C++, Delphi, etc.). Per se, shortcomings in a programming language are not unusual and, considering the origin and goal of (Visual) Basic, this abandonment could have been foreseen. What is interesting is that Microsoft took the somewhat brave decision to break compatibility with the previous Visual Basic. This left hobbyist programmers, professional programmers and companies in a strange zone where they still need the old Visual Basic for sometimes critical software, but without any remaining support for the language and its extensions. Of course, Tim Anderson adds that you can still write these extensions (the infamous .DLLs) in other languages and link them with Visual Basic code. And there is informal support from The Community.

Things could have been better if the language had been standardized and, even better, if the language specifications and their evolution had been open to the community.

First, standardization… C, C++ and JavaScript (for example) have been standardized for a long, long time (longer than Visual Basic has even existed) and they are still widely used. Depending on the source, their usage may be decreasing or stable but, along with Java, C and C++ are the only languages used by more than 10-15% of projects. Standardization means the specifications are available, so people and companies can build their own compiler (C, C++), implement their own interpreter (JavaScript in browsers) and create their own libraries around these languages. There is no fear that your favorite IDE/compiler/… will be abandoned because, if that happens, you will always be able to switch to other tools and continue using the same language.
But standardization isn’t the only keyword. C# and part of .NET are also standardized, but there is a “grey zone” where you don’t know what is patented (yes, another bad thing about software patents) and by whom. This uncertainty may hamper the development of alternatives to the “official” compiler/IDE/… (see the discussions about the Mono project on the internet).

These problems of patents and of who owns what lead to another important aspect of a programming language: how it is developed. If one company holds everything (as with Visual Basic, C# and Java), it is very difficult to suggest improvements, submit them, file bug reports, etc. And once the company decides it no longer has any interest in the language, all efforts and developments slowly become useless, as we are seeing with Visual Basic now. Languages like Perl and Python are openly developed by the community. Usually the effort is catalysed by a steering committee (or a similar structure), but should that committee unilaterally decide to halt the development of a language, the source code and the discussions around it could still be picked up by other people.

So, if you can choose, and even if some languages are all the hype at the moment, choose a free/open language: your efforts to learn it and to master its tricks and details will never be nullified by a third party’s decision.

Are re-examined patents still valid?

In “Patenting the obvious?” (pdf), you’ll read about people fighting against a patent on methods for making embryonic stem cells from primates. I won’t go into the details of the patent itself (although I think there shouldn’t be any patent based on or containing living “things” or parts of them). I just want to share my surprise when I read this (emphasis is mine):

In its 2 April statement, the patent office said that it accepted these arguments, and intended to revoke the patents. WARF has until June to respond to the decision, and if it is unhappy with the outcome, it can then initiate an appeal. The patents will be treated as valid until the re-examination process is complete, that is, until WARF’s response and the possible appeal have concluded. That could take years.

In other words, you can file as many patents as you want, even if they are silly, even if you don’t disclose prior art (which should make your patent irrelevant): even if your patent is challenged, it will remain valid for years! How is that possible?

I’m not a lawyer, but I looked for details about the classical appeal procedure… Let’s say two people go to court, one as the defendant and the other as the plaintiff. Both are presumed innocent. After the trial, if the judgment does not satisfy one of the parties, that party can file a notice of appeal and the whole case will be heard by the next higher court with jurisdiction over the matter. During the appeal, both parties remain presumed innocent! It is as if nothing had happened and everything has to start again. Apparently, that is not how the US Patent and Trademark Office works: once it has granted a patent, that decision stands until the re-examination and all the appeals are over. I think this is wrong too.