Category: Open Access

Dasher: where do you want to write today?

Hanna Wallach put her slides about Dasher on the web (much the same as those from her mentor). Dasher is an “information-efficient text-entry interface”.

What made me interested in Dasher is her introduction about the ways we communicate with computers and the ways computers help us do so. There are keyboards (even reduced ones), gesture alphabets, text-entry prediction, etc. I am interested in how people can enter text on a touch screen, without a physical keyboard. Usually, people use a virtual keyboard (as in kiosks for tourists or on handheld devices), but these are apparently not the best solutions.

Dasher offers an interesting way of entering text, where you pull and push elements on screen to form words, with the computer “guessing” likely words from the previous letters. It requires a lot of visual attention, but this can be turned into a feature for people unable to use their hands (for a physical keyboard and mouse); one man even wrote his entire B.Sc. thesis with Dasher and his eyes!

You can download Dasher for a wide range of operating systems and even try it in your web browser (Java required). By the way, it’s the first piece of software I have seen that adopted the GNU GPL 3. After reading the short explanation, you’ll easily be able to write your own words, phrases and texts.

The Dasher team is interested in the way people interact with the computer: they use a language model to display the next letters. On the human side, I am wondering whether this kind of tool has an influence on how the human brain works. Visual memory must be involved with a physical keyboard (“where are the letters?”) but also here (same question, except the locations of the letters change all the time). Here, letters are moving, but one can learn that boxes are bigger when the next letter’s probability is higher. How is the brain involved in such a system? What is it learning exactly? Are there fast and slow learners in this task? It could be interesting to look at this …
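To make that last point concrete, here is a toy sketch of the underlying idea (my own illustration, assuming a simple bigram language model; it is not Dasher’s actual code): the screen height given to each candidate letter is proportional to its probability after the current letter.

```python
# Toy illustration of the idea behind Dasher's display (NOT the real
# implementation): each candidate letter gets a box whose height is
# proportional to its probability given the previous letter, here
# estimated with a simple bigram model.

from collections import Counter, defaultdict

def train_bigram(text):
    """Count letter-to-letter transitions in a training text."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def box_heights(counts, context, total_height=100.0):
    """Share the screen height among candidate letters in proportion
    to their probability after the current `context` letter."""
    letter_counts = counts.get(context)
    if not letter_counts:
        return {}
    total = sum(letter_counts.values())
    return {letter: total_height * n / total
            for letter, n in letter_counts.most_common()}

model = train_bigram("the theory is that the threshold of the thing thins")
for letter, height in box_heights(model, "t").items():
    print(f"after 't', the box for {letter!r} is {height:.1f} units high")
```

With a real corpus, “h” would get by far the biggest box after “t”, which is exactly why frequent continuations are so easy to steer into.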

"A closed mind about an open world"

Under this title, James Boyle, professor of law at Duke Law School (USA), wrote a comment article in the Financial Times [1]. For him, we all have a cognitive bias regarding intellectual property and the internet: openness aversion, i.e. the tendency to undervalue the importance and productive power of open systems, open networks and non-proprietary production. With three examples (the internet, free software and Wikipedia), he shows the evolution of mentalities towards these “open things”. In 1991, scholars, businessmen and bureaucrats (and even us, maybe) would have scoffed at the internet as a business product. At that time, control and ownership seemed the right way to go.

Mentalities have since evolved, and many of us now love the internet, free software and Wikipedia. But the openness aversion is still there, and some people keep trying to restrict freedom (threats to net neutrality, the DMCA, DADVSI, DRM, TCPA/TPM, etc.).

[1] Boyle J., “A closed mind about an open world”. Financial Times, August 8th, 2006, p. 9.

P.S. By the way, I discovered Prof. Boyle and his articles through his website. I’ll now have plenty of interesting things to read (as if I didn’t already have enough articles and books to read …).

GooDiff monitors (changes in legal documents of) service providers

GooDiff began its work a week ago, and I haven’t seen many news items or blog posts about it. If I understood correctly, the idea behind GooDiff is to monitor changes in the legal documents of (internet) service providers (like Google or Yahoo!). Indeed, service providers often change their legal documents on the fly, especially in critical sections like privacy, copyright and the like. With GooDiff, consumers and users are now able to keep track of these changes. Thanks Alexandre!
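Out of curiosity, here is how I picture the general mechanism (a minimal sketch of my own; I know nothing about GooDiff’s actual implementation): fetch the document, diff it against the last stored copy, and report the changes.

```python
# Rough sketch of the general idea behind a service like GooDiff
# (not its actual implementation): download a legal document, compare
# it with the previously cached copy, and report a unified diff.

import difflib
import os
import urllib.request

def check_document(url, cache_file):
    """Fetch `url`, diff it against the cached copy, update the cache."""
    with urllib.request.urlopen(url) as response:
        new_text = response.read().decode("utf-8", errors="replace")
    old_text = ""
    if os.path.exists(cache_file):
        with open(cache_file, encoding="utf-8") as f:
            old_text = f.read()
    with open(cache_file, "w", encoding="utf-8") as f:
        f.write(new_text)
    return list(difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(),
        fromfile="previous", tofile="current", lineterm=""))

# Hypothetical usage: the URL is only an example, not a page GooDiff tracks.
for line in check_document("https://example.com/privacy-policy",
                           "privacy-policy.cache"):
    print(line)
```

Run such a script periodically (e.g. from cron) and you have the skeleton of a terms-of-service watchdog.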

P.S. Although the name and logo can mislead you (and misled me), the primary origin of the name “GooDiff” is not Google. The “Goo” part comes from grey goo (in science fiction, “goo” is a large mass of self-replicating nanomachines lacking large-scale structure, which may or may not actually look like a drippy, shapeless mass). I am learning new words every day!

Open Access publication message

We, scientists, create, provide and judge the science presented to journals. While we are not paid by the publishers, we pay to get access to this science.

Publishers, who concentrate more and more journals within a few companies, use their oligopoly to charge more and more and earn tremendous amounts of money. They exploit the snobbery about impact factors and the tyranny it exerts on the careers of young scientists.

We can dilute this power in a simple way. Open access is the only answer. Whenever I have to choose one reference out of several, I shall from now on choose a reference to a paper that I and my readers can access freely on the Internet. If we all do that, we shall push up the impact factor of those journals (printed or not) which do not begrudge us access.

If you agree with this message, spread it.

(message originally from Prof. Jacques E. Dumont, IRIBHM, ULB; the links are mine)

A step beyond “simple” Open Access to scientific literature

Combining a trend from the free software world with a reaction to increasing subscription costs, the last decade saw the emergence of the “Open Access” movement in scientific literature. Instead of transferring all your rights (and copyright) to a publisher that will sell your work to other scientists, you can choose to publish your work in Open Access journals. In this case, you retain the rights (and copyright) on the article you wrote. Moreover, your work is freely available to other scientists (at least in electronic format) while retaining its quality, since peer review is still there. As an article writer, you only risk being cited more often (since your article is freely available). As an article reader, you only risk gaining more knowledge (since more and more interesting articles are published with various Open Access publishers like BioMed Central, the Public Library of Science, etc.).

Now, I recently discovered Science Commons, a kind of extension of Creative Commons for the sciences. Most Open Access publishers chose one of the Creative Commons licences for the articles and additional material they offer. I have not yet read everything on their website, but this initiative seems to be a nice “enhancement” of access to scientific literature, since it goes a step further by also covering material licensing and access to raw data.

Of course, one of their projects deals with publishing scientific literature. For me, this project can bring a somewhat more “independent” discussion to the field (by “independent”, I mean that they don’t seem related to any publisher, although they promote Open Access, of course). What’s more interesting are their two other projects, because they can bring some fresh air and new ideas into their respective fields.

Science Commons’ second project deals with licensing material (hardware). It will explore standard licensing models to facilitate wider access to scientific materials. Without material, we cannot do science (unlike philosophy, which works on ideas, concepts, etc.). Sometimes, material is so specific that it isn’t sold by any big pharmaceutical / biotech company but is only produced on demand by another lab, in another corner of the earth. For the moment, nearly every material transfer between two labs is associated with a specific transfer agreement. Some standardisation would allow scientists to focus on their work rather than on legal and administrative annoyances, while still giving rights and credit to the right group / people.

Finally, the third Science Commons project explores ways to ensure broad access to scientific data. If you are lucky enough to publish your findings, maybe someone will find other effects, or give you hints for finding other relevant facts, simply by looking at your raw data. This will allow science to evolve more quickly and, for example, will avoid the unnecessary use of many more laboratory animals just to reproduce an experiment while trying to find other or additional effects beyond those already published. And if your experiments did not work, for whatever reason, publishing those unsuccessful results will also prevent other people from performing the same unnecessary experiments. In the biological sciences, it’s often difficult to publish a paper on, e.g., the fact that iron has no effect on some metabolic pathway. But I think it’s worth publishing data from such an experiment, since it can give clues about the effects of other substances on certain biochemical pathways.

I know this post is rather oriented towards the biological sciences (I am sorry, it’s my field; and I didn’t even write about the NeuroCommons project, part of the Science Commons Data project). I suggest you have a look at the Science Commons website and see for yourself how it can help your field.

Quaero and the quest for alternatives

An article in the French newspaper Le Monde presents Quaero (“to seek”, in Latin) as the future “European Google”. Comments on this article are divided between supporters of this alternative and detractors who predict another bureaucratic, bloated, ineffective project. My point here is not to argue for or against this project. But I would like to dwell on the American databases and search engines that serve the entire world.

When you need to look up some information on the internet (mainly, the web), I am sure you use (American) tools like Google, Yahoo! or AltaVista. In the life sciences, we have a wonderful database, PubMed, a service of the (American) National Library of Medicine that includes over 16 million citations of biomedical articles. When you are preparing a presentation or an experiment on a subject, it’s a great tool for doing the bibliography.
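Incidentally, PubMed can also be queried from a script, through NCBI’s E-utilities web interface; here is a minimal sketch (the search term is just an example of mine):

```python
# Minimal sketch of a PubMed search through NCBI's E-utilities:
# the esearch service returns an XML list of matching PubMed IDs.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def pubmed_search(term, max_results=5):
    """Return the PubMed IDs of the first articles matching `term`."""
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                     "retmax": max_results}))
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    return [id_elem.text for id_elem in tree.iter("Id")]

# Example query; any PubMed search expression works here.
print(pubmed_search("thyroid hormone metabolism"))
```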

I am insisting on the fact that these tools come from American companies or government agencies because I am wondering what we are going to do if, one day, for whatever reason, the U.S.A. decides to stop providing these tools to the rest of the world. Or what if they simply decide to filter the content delivered outside their country? Are you sure they are not already doing it? It’s the same problem with the satellite positioning system (GPS; that’s why Europeans are launching the Galileo project), the root internet domain name servers, the Microsoft Windows operating system, etc.

So, if the goal of Quaero is to achieve relative independence, I agree with it (though I still fear it will become a costly and ineffective tool). But I am wondering why there isn’t any free (at least as in “free beer”) alternative to PubMed. For the moment, I only see an alternative in cooperation and interoperability between Open Access repositories, with projects like the Open Archives Initiative, OpenDOAR, GNU EPrints and other software. But until Open Access journals are widely used by scientists, this won’t be a PubMed replacement. And there is still no alternative for scientific literature already published in Closed Access journals.
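As a glimpse of what that interoperability looks like in practice, here is a minimal sketch of harvesting article titles from a repository through the OAI-PMH protocol (the repository address below is hypothetical; any compliant archive, e.g. one running GNU EPrints, answers the same requests):

```python
# Minimal sketch of harvesting metadata from an Open Access repository
# via OAI-PMH. The endpoint URL is hypothetical; any OAI-PMH-compliant
# repository exposes the same verbs.

import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def list_titles(base_url):
    """Ask the repository for its records and print each article title."""
    url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iter(OAI + "record"):
        title = record.find(".//" + DC + "title")
        if title is not None:
            print(title.text)

# Hypothetical repository address, for illustration only.
list_titles("https://repository.example.org/oai")
```

A harvester that walks many such endpoints is essentially what would be needed to start stitching repositories into a PubMed-like index.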

Non-proprietary protocol =? absence of control =? watch out for the far right

Either I am paranoid, or I am right to dislike the following shortcut: non-proprietary protocol = absence of control = watch out for the far right …

Summary: the Vlaams Belang, a Flemish / Belgian far-right political party, broadcasts a programme on the AM band from abroad, via the DRM system (Digital Radio Mondiale, a sort of equivalent of DAB or RSN). This two-hour programme can apparently be received (“listened to”) in Belgium with the appropriate receiver. The problem is that this programme / station / party has no licence to broadcast in Belgium.

So, I read this in the press. So far, no problem. But the article in La Libre Belgique (“Le Vlaams Belang a sa radio. Illégale?”) contains the following paragraphs:

“But DRM is a universal digital system. It is a ‘non-proprietary’ system in the sense that it was not developed by a single manufacturer, but through the combined efforts of the members of a consortium (BBC, Europe 1, RTL, RTRN Russia, Radio Vatican…).

This absence of control therefore allows the Belang to broadcast freely, from a transmitter apparently located in Eastern Europe, probably in Russia.”

Reading this without any preconceptions, I would conclude that a “non-proprietary” system, not developed by a single manufacturer (but by several), implies an absence of control and allows anything and everything to be broadcast, even the worst things. And here is an example.

Let’s push it a bit further, in these times when “security” so often eclipses liberty, and reverse the logic: there should therefore be tighter control of broadcasts (to avoid ending up there), and therefore we would need more “proprietary” systems that allow such control.

Yes, but … that is aiming at the wrong target! The fact that these people with their nauseating ideas broadcast in DRM does not make DRM bad. They could just as well have broadcast with another digital system, or even in analogue (pirate radio); the technology would still have nothing to do with it. Besides, if I understood correctly, knowing the prosecution they face in Belgium, they broadcast from another country and use the technology suited to that (analogue apparently does not allow transmission over such long distances).

Next, we should not blame the technology for control problems that are not its responsibility. Control must be exercised by the State (in this case), whatever the technology used, whether “non-proprietary” or “proprietary”. Having navigated free software and its ideas of freedom for some time now, it seems to me that, on the contrary, “non-proprietary” technologies allow anyone to use the technology as they see fit, to know all its facets and, where appropriate, to enforce the law. In my humble opinion, it would always be possible for a “proprietary” broadcasting technology to include a mechanism to circumvent the law or prevent its enforcement; being “proprietary”, control over what is broadcast would then lie in the hands of a single company (or group of companies) which, in a purely economic logic, would apply the law of the highest bidder (in cold, hard cash).

So, while the “non-proprietary” world must already put a lot of effort into demonstrating its advantages of freedom, adaptability, performance, …, it should also be able to fight this kind of received idea.

P.S.: this post was also posted in the LinuxFr journals.