Month: October 2006

Search for images by sketching

On his blog, Laurent wanted to know who this guy is. I thought it was an interesting starting point to see how good Retrievr is, “an experimental service which lets you search and explore in a selection of Flickr images by drawing a rough sketch”.

Although my drawing skills really need to be improved (and their drawing tools could be more refined – always blame others for your weaknesses 😉 ), a first sketch gives some interesting results (see screenshot below): 7 of the retrieved photos (44%) show a b/w human face in “frontal view” (if you count the dog, it’s even 8 correct images).

[Screenshot: Retrievr results for my sketch]

If I just give the photo URL instead, the results are not as good (see screenshot below). I am nearly 100% sure this is because it is a greyscale photo that was scanned as a color image.

[Screenshot: Retrievr results for the photo URL]

When I download the image to my hard disk, convert it to greyscale and upload it to Retrievr, the results are closer to what I expected: 10 images (63%) show a person with his/her hair, seen either from the front or from the back.
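For the conversion step, a one-liner is enough. This is just a sketch using PIL (the post does not say which tool I actually used, and the file names are made up):

# Convert a color scan to a true greyscale image before feeding it to Retrievr.
# Requires the Python Imaging Library (PIL/Pillow); file names are examples only.
from PIL import Image

Image.open("photo_scan.jpg").convert("L").save("photo_scan_grey.jpg")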

[Screenshot: Retrievr results for the greyscale version]

So this photo is not among the “most interesting” ones on Flickr (it is probably not even on Flickr). I suppose that if one applied Retrievr to a larger subset of photos, we would have a higher probability of finding it (but that would also increase the noise, i.e. the number of similar photos). If you like playing with Flickr, other interesting Flickr mashups can be found here.

Automated Pubmed reference to BibTeX

In biology, we often need to use PubMed, a search engine for biomedical articles that indexes citations from MEDLINE and other life science journals.

In the MS-Windows world, you have nice, proprietary tools (like Reference Manager or EndNote) that retrieve citations from PubMed, store them in a database and let you use them in proprietary word processing software (in fact, in MS-Word only, since neither WordPerfect nor OpenOffice.org is supported). If you are using BibTeX (for LaTeX) as your citation repository, there aren’t many tools. The best one, imho, is JabRef, a free reference manager written in Java (for me, the only “problem” is that it adds custom, non-BibTeX tags). Or you can edit the BibTeX file yourself with any text editor.

The problem with manual editing is that it is error-prone (even when copying/pasting from the web). Since Python programming is my hobby horse at the moment, there are two solutions to this problem:

  1. Use Biopython to get a reference from PubMed, but are you ready to add a huge module dependency just to use one function?
  2. Write your own Python script, using a PubMed URL to download your reference and a little bit of XML parsing to extract the relevant info (one could use the ESearch and EFetch tools properly, but my lazy nature tells me to simply use the URL).

Obviously, I chose to write my own Python script. Each reference, fetched in the PubMed XML format (example; full DTDs), should come out like this:

@article{poirrier06,
  author = {Poirrier, J.E. and Poirrier, L. and Leprince, P. and
Maquet, P.},
  title = {Gemvid, an open source, modular, automated activity
recording system for rats using digital video},
  year = 2006,
  journal = {Journal of circadian rhythms},
  volume = 4,
  pages = {10},
  pmid = 16934136,
  doi = {10.1186/1740-3391-4-10}
}

The script is here (4kb). First, use PubMed to find the reference you want, then take its PubMed ID (PMID) and launch the program, redirecting the output to your BibTeX file, for example:

./pyP2B.py 16934136 >> myrefs.bib

If you like, you can edit the script to change the tab size (here = 2).

How does it work?

  1. With PubMed, I do not use the official tools but a plain HTTP query; it is much simpler and easier. The script asks PubMed for the citation matching the given PMID. Since it gets an HTTP answer, I need to parse this answer and replace some entities to obtain a valid XML file (see the sketch after this list).
  2. Once I have the XML file, and after some checking, I extract the fields with XPaths from lxml (for me, XPaths are quick and dirty compared to writing a DOM/SAX parser, but it works!).
  3. Then the script simply prints the result to the standard output (even if it’s an error; a possible improvement would be to print errors on the error output). You just need to redirect this output into your BibTeX file.
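Here is a minimal sketch of that approach, not the original pyP2B.py: it fetches the PubMed record for a PMID over HTTP with an EFetch URL, extracts a few fields with lxml and prints a BibTeX entry on standard output. It is written for current Python and lxml rather than the 2006 versions, the URL and element paths are my assumptions about the PubMed XML format, and most of the checking done by the real script is left out.

#!/usr/bin/env python
# Minimal sketch (not the original pyP2B.py): PMID -> BibTeX entry on stdout.
# The EFetch URL and the element paths are assumptions about the PubMed XML format.
import sys
import urllib.request
from lxml import etree

EFETCH = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
          "?db=pubmed&retmode=xml&id=")

def pmid_to_bibtex(pmid):
    xml = urllib.request.urlopen(EFETCH + pmid).read()
    root = etree.fromstring(xml)
    art = root.find(".//MedlineCitation/Article")
    authors = " and ".join(
        "%s, %s" % (a.findtext("LastName"), a.findtext("Initials"))
        for a in art.findall("AuthorList/Author"))
    fields = [
        ("author",  "{%s}" % authors),
        ("title",   "{%s}" % art.findtext("ArticleTitle")),
        ("year",    art.findtext("Journal/JournalIssue/PubDate/Year")),
        ("journal", "{%s}" % art.findtext("Journal/Title")),
        ("volume",  art.findtext("Journal/JournalIssue/Volume")),
        ("pages",   "{%s}" % art.findtext("Pagination/MedlinePgn")),
        ("pmid",    pmid),
        ("doi",     "{%s}" % root.findtext(".//ArticleId[@IdType='doi']")),
    ]
    body = ",\n".join("  %s = %s" % (k, v) for k, v in fields)
    return "@article{pmid%s,\n%s\n}" % (pmid, body)

if __name__ == "__main__":
    # usage, as with the real script: ./pyP2B_sketch.py 16934136 >> myrefs.bib
    print(pmid_to_bibtex(sys.argv[1]))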

Edit on October 23rd: this script has errors when dealing with non-ASCII characters like the “ö” in Angelika Görg. I won’t fix it for the moment.

Diwali 2006 @ ISAL

On Saturday, after the Kolam ritual, we went to Diwali, the Hindu Festival of Lights, organized by the ISAL. It was very nice to meet people we had already met at previous ISAL “functions” and to talk with them. And I think ISAL is attracting more and more people, both of Indian origin (working in Belgium, for example) and of non-Indian origin: this time, people from Belgium, China, Poland, Russia, Spain, etc. were there. As usual, I took some photos.

Kolam ritual @ Bozar

On Saturday, we went to see a Kolam ritual at Bozar. Kolam refers to the designs the Pulluvans draw on the floor on many occasions, using multicoloured sands, rice and spices. Here, it was supposed to be a ritual for a family (“supposed” only because it was a demonstration for the public and no family was specifically involved). In this ritual, two women in a trance erase the drawings and answer the questions the family asks. The whole ceremony is linked to snakes, which are supposed to have been in Kerala before men and which should be pleased in order to live together peacefully.

I took some photos but, unfortunately, my camera isn’t good without flash (*). This ritual was part of the India Festival at Bozar. If you want some suggestions of things to see (dance, theatre, music, literature, exhibitions, etc.), my father-in-law gave some here (in French).

(*) I found that people taking photos with flash during this religious ritual (even after the commentator specifically asked them not to do so) were very rude and insulting to the artists.

Symposium on Neuroproteomics in Gent

This Friday, I attended the Symposium on Neuroproteomics organised at the University of Gent (B). Apart from Deborah Dumont‘s excellent talk, the lectures focused almost exclusively on oxidative stress, neurological diseases and gel-free proteomics (like 2D-LC). One speaker even seemed to talk only to his computer or his presentation. So it was not very interesting for me (I am finishing my thesis, which is based on gel proteomics). The organisation was very “basic” and we didn’t even get a free pen and paper (fortunately, I took two pens and a notebook).

Dasher: where do you want to write today?

Hannah Wallach put her slides about Dasher on the web (quite similar to these ones from her mentor). Dasher is an “information-efficient text-entry interface”.

What made me interested in Dasher is her introduction about the ways we communicate with computers and how they help us do so. There are keyboards (even reduced ones), gesture alphabets, text entry prediction, etc. I am interested in the ways people can enter text on a touch screen, without a physical keyboard. Usually, people use a virtual keyboard (as in kiosks for tourists or on handheld devices), but these are apparently not the best solutions.

They came up with an interesting way of entering text, where pulling and pushing elements on screen is used to form words (with the help of the computer, which “guesses” the words from the previous letters). It requires a lot of visual attention, but this can be turned into a feature for people unable to use their hands (for a physical keyboard and mouse; one man even wrote his entire B.Sc. thesis with Dasher and his eyes!).

You can download Dasher for a wide range of operating systems and even try it in your web browser (Java required) (btw, it’s the first piece of software I have seen that adopted the GNU GPL 3). After reading the short explanation, you’ll easily be able to write your own words, phrases and texts.

They are interested in the way people interact with the computer. They use a language model to display the next letters. On the human side, I wonder whether this kind of tool has an influence on how the human brain works. Visual memory should be involved with a physical keyboard (“where are the letters?”), but also here (same question, except that the location of the letters changes all the time). Here, the letters are moving, but one can learn that a box is bigger when the probability of the next letter is higher. How is the brain involved in such a system? What is it learning exactly? Are there fast and slow learners in this task? It could be interesting to look into this…
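To make the “bigger box for a more probable letter” idea concrete, here is a toy sketch of how box heights could be allocated from letter probabilities. It only counts letter pairs in a sample string, which is nothing like Dasher’s real language model; the function name and the sample text are made up for illustration.

# Toy illustration of allocating screen space proportionally to the
# probability of the next letter. Dasher's real language model is far
# more sophisticated; this only counts letter pairs in a sample string.
from collections import Counter

SAMPLE = "the quick brown fox jumps over the lazy dog " * 20

def next_letter_boxes(context, text=SAMPLE, height=1.0):
    """Split a vertical strip of total `height` between the possible next
    letters, each box proportional to how often that letter follows the
    last character of `context` in `text`."""
    last = context[-1]
    counts = Counter(b for a, b in zip(text, text[1:]) if a == last)
    total = sum(counts.values()) or 1
    boxes, top = [], 0.0
    for letter, n in counts.most_common():
        size = height * n / total
        boxes.append((letter, top, top + size))  # (letter, box top, box bottom)
        top += size
    return boxes

# After a space (i.e. at the start of a word), 't' gets the biggest box
# because "the" starts two of the nine words in the sample text.
for letter, top, bottom in next_letter_boxes(" "):
    print("%r: from %.2f to %.2f (size %.2f)" % (letter, top, bottom, bottom - top))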