Category: Neuroscience

Open source animal behaviour monitoring

In the latest issue of the Journal of Neuroscience Methods (impact factor: 1.5), three papers deal with animal behaviour monitoring, and two of them introduce open source software.

Roseanna Ramazani and her colleagues “designed an automated system for the collection and analysis of locomotor behavior data, using the IEEE 1394 acquisition program dvgrab, the image toolkit ImageMagick and the programming language Perl” [1]. What is interesting is that they highlight the longevity and reliability of open source software, leaving behind the simplistic view “open source = free as in free beer”:

Some of these previous methods might have been able to meet our needs. Unfortunately, these previous programs are no longer available and all use proprietary software and/or hardware that no longer exists. The methods that we describe use only open source software tools and run interchangeably on different hardware platforms (we have used Mac OSX, Windows XP and Linux, although the data in this paper was all analyzed with a computer running Linux). Open source tools tend to have greater permanence than closed source since they are maintained by communities and they can be modified by the end user. It also is not limited to a single camera system or computer platform. It is readily available to the public, and can be modified by future users, provided that they have a general understanding of the programming language Perl.
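For the curious, here is a minimal sketch (my own illustration, not the authors' Perl code) of how such a frame-differencing step could look in Python, assuming the video has already been captured and split into still images (frames/frame_0001.png, frames/frame_0002.png, …) and that ImageMagick's compare tool is on the PATH:

    # Hypothetical sketch, not the published pipeline: score movement between
    # consecutive frames by counting differing pixels with ImageMagick.
    import glob
    import subprocess

    def pixels_changed(frame_a, frame_b):
        # `compare -metric AE` writes the number of differing pixels (absolute
        # error) to stderr; a non-zero exit code only means "the images differ".
        result = subprocess.run(
            ["compare", "-metric", "AE", frame_a, frame_b, "null:"],
            capture_output=True, text=True,
        )
        return int(float(result.stderr.split()[0]))

    frames = sorted(glob.glob("frames/frame_*.png"))
    activity = [pixels_changed(a, b) for a, b in zip(frames, frames[1:])]
    print(activity)  # one rough "movement score" per frame transition

A time series like this is then easy to threshold or bin per minute to get an activity count, which is essentially what video-based locomotor monitoring boils down to.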

In a second paper, Ganea and colleagues describe “a novel home cage activity counter for the recording of basal activity in rodents” [2]. It is not open source, but they describe a system similar to Gemvid (my system) and they don’t even cite it! They submitted their paper after mine was published, and they even cite Pasquali’s paper, which describes a similar method and was published in the same journal as Gemvid. But maybe, had they cited my paper, their discussion about cost, the limitation to the light phase of the circadian cycle and special hardware would fizzle out 😉

In a third paper, Jonathan Peirce introduces “psychophysics software in Python” [3]. I must admit I didn’t know what “psychophysics software” was until I read the paper and visited the website: http://www.psychopy.org.

So, it seems open source software is slowly gaining more and more attention in biomedical science … (when I started my Ph.D., nearly no one spoke about open source software, nor about open access to the scientific literature, btw).

[1] R.B. Ramazani, H.R. Krishnan, S.E. Bergeson and N.S. Atkinson, “Computer automated movement detection for the analysis of behavior”, Journal of Neuroscience Methods 162 (1-2): 171-179, 2007
[2] K. Ganea, C. Liebl, V. Sterlemann, M.B. Müller and M.V. Schmidt, “Pharmacological validation of a novel home cage activity counter in mice”, Journal of Neuroscience Methods 162 (1-2): 180-186, 2007
[3] J.W. Peirce, “PsychoPy—Psychophysics software in Python”, Journal of Neuroscience Methods 162 (1-2): 8-13, 2007

Noise level due to ventilation

For the past few days, I have been back in my previous laboratory to collect some more samples. While looking after my rats, I measured the noise level with a dB meter. The conditions were:

  • 8:30 am
  • doors left open (except in the housing unit), as is usually the case
  • five samples per room: one in each corner (without moving any furniture) and one in the middle
  • measurements taken at ear height
  • device: YF-20 (YFE)

[Figure: noise level measurements]

The overall mean is 61.7 ± 3.3 dB. But if we go into the details, a Kruskal-Wallis ANOVA shows that most of the rooms have the same dB levels. There is, however, a statistically significant difference between the room where we perform experiments and the housing unit (p < 0.05), and a highly significant difference between the experiment room and the second office (p < 0.005). There is also a significant difference between the two offices (p < 0.05; now guess which one I am in …).
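For illustration, here is how such a comparison could be run in Python with SciPy; the numbers and room names below are invented placeholders, not my actual measurements:

    # Hypothetical sketch with made-up readings (five per room: four corners + centre).
    from scipy.stats import kruskal, mannwhitneyu

    experiment_room = [64.1, 65.0, 63.8, 64.5, 66.2]   # dB
    housing_unit    = [58.9, 59.4, 60.1, 58.7, 59.8]
    office_1        = [61.0, 61.5, 60.8, 62.1, 61.3]
    office_2        = [57.5, 58.0, 57.8, 58.3, 57.9]

    # Global test: do the rooms share the same distribution of dB levels?
    h, p = kruskal(experiment_room, housing_unit, office_1, office_2)
    print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

    # Simple pairwise follow-up between two rooms of interest.
    u, p_pair = mannwhitneyu(experiment_room, housing_unit, alternative="two-sided")
    print(f"Experiment room vs housing unit: U = {u:.1f}, p = {p_pair:.4f}")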

Although very high compared to other labs, these levels are not considered harmful (about the same level as a busy restaurant – but you don’t stay 8 hours a day in a restaurant). And anyway, I always wear ear plugs (3M 1100); according to their box, they reduce the noise level by 20-30 dB.

Would you like to visit one of my labs?

It will be possible this Saturday, March 17th, 2007! For the EDAB Brain Awareness Week, one of my labs is organizing some lectures, and you’ll also have the opportunity to visit the lab and see demonstrations of the experiments we do and how we do them. One of my mentors, Dr. P. Leprince, will tell (and show) you how we can identify proteins and their roles. Other workshops include microscopy, electrophysiology and behaviour. Lecture topics include stem cells, drug addiction and brain injuries. You can find more info on the lab website (look for our activities, in French).

Unfortunately, I may not be there, since I may have an experiment to run in the other lab at the same time.

Symposium on Neuroproteomics in Gent

This Friday, I attended the Symposium on Neuroproteomics organised at the University of Gent (B). Apart from Deborah Dumont‘s excellent talk, the lectures focused almost exclusively on oxidative stress, neurological diseases and gel-free proteomics (like 2D-LC). One speaker even seemed to talk only to his computer or his slides. So, it was not very interesting for me (I am finishing a thesis based on gel proteomics). The organisation was very “basic”: we didn’t even get a free pen and paper (fortunately, I brought two pens and a notebook).

Dasher: where do you want to write today?

Hannah Wallash put her slides about Dasher on the web (quite similar to these ones from her mentor). Dasher is an “information-efficient text-entry interface”.

What got me interested in Dasher is her introduction about the ways we communicate with computers and how they help us to do so: keyboards (even reduced ones), gesture alphabets, text entry prediction, etc. I am interested in the ways people can enter text on a touch screen, without a physical keyboard. Usually, people use a virtual keyboard (like in tourist kiosks or on handheld devices), but these are apparently not the best solution.

They came up with an interesting way of entering text, where you pull and push elements on screen to form words (with the computer “guessing” the next words from the previous letters). It requires a lot of visual attention, but this can be turned into a feature for people unable to use their hands (for a physical keyboard and mouse; one man even wrote his entire B.Sc. thesis with Dasher and his eyes!).

You can download Dasher for a wide range of operating systems and even try it in your web browser (Java required) (btw, it’s the first piece of software I’ve seen that adopted the GNU GPL 3). After reading the short explanation, you’ll easily be able to write your own words, phrases and texts.

They are interested in the way people interact with the computer, and they use a language model to display the next letters. On the human side, I wonder whether this kind of tool has an influence on how the human brain works. Visual memory should be involved with a physical keyboard (“where are the letters?”) but also here (same question, except that the location of the letters changes all the time). Here, the letters move, but one can learn that a box is bigger when the probability of the next letter is higher. How is the brain involved in such a system? What is it learning exactly? Are there fast and slow learners in this task? It could be interesting to look into this …
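To make the idea concrete, here is a toy sketch (my own illustration, not Dasher's actual model): a character-level bigram model estimates the probability of each next letter from a tiny corpus, and each letter's on-screen "box" gets a height proportional to that probability.

    # Toy sketch of the Dasher principle: box size proportional to P(next letter).
    from collections import Counter, defaultdict

    corpus = "the quick brown fox jumps over the lazy dog " * 20
    alphabet = sorted(set(corpus))

    # Count how often each character follows each context character.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def box_heights(context, total_height=100.0):
        """Split the available height among letters, proportionally to P(letter | context)."""
        c = counts[context[-1]]
        total = sum(c[ch] + 1 for ch in alphabet)          # add-one smoothing
        return {ch: total_height * (c[ch] + 1) / total for ch in alphabet}

    heights = box_heights("th")
    for ch, h in sorted(heights.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{ch!r}: {h:.1f}")   # 'e' should get the tallest box after "th"

Real Dasher uses a much smarter adaptive model over longer contexts, but the principle is the same: likely continuations get more screen space, so they are faster to select.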

EURON Ph.D. days in Maastricht

For the last day and a half, I was in Maastricht (NL) for the 10th EURON Ph.D. days. EURON is the “European graduate school of neuroscience”. I presented a poster and gave a 15-minute oral presentation of my latest results. It was a good meeting in the first sense of the word: I met interesting people. I also enjoyed listening to the other Ph.D. students’ presentations, since it always gives you i) a glimpse of what other people (in other universities) are interested in (by other means than paper/digital articles) and ii) the impression that you are not the only one having problems with your protocol, your animals, your proteins, … The location was great (Fort Sint Pieter) and the sun was out. The ULg team was very small (only 4 Ph.D. students and 2 senior scientists out of about 100 participants), but this was an occasion to get to know the other students better.

Btw, the new EURON website uses Joomla, a free CMS, as a backend (look at the favicon and the meta tags in the HTML code).

Recognition

A media outlet was looking for someone with experience in scientific mazes. They contacted Rudy D’Hooge, from the Laboratory of Neurochemistry & Behaviour, University of Antwerp (with Prof. De Deyn, he wrote an authoritative review on the subject). He gave my name and my lab as a reference for the Morris water maze (*). Maybe he gave other names and labs too, but …

Nearly 4 years ago, I took the train to visit his laboratory to see how we could set up a water maze in our lab, what protocol we needed to use, pitfalls to avoid, … We were starting from scratch, I learned from them (**), and now they cite us as a reference lab. After so much toil and trouble, it is heart-warming. Thank you.

(*) By the way, the photo illustrating the Wikipedia article on the Morris water maze is mine 🙂
(**) To be complete, I also learned about the maze from Prof. C. Smith and from Prof. Steinbusch’s lab.

Another scientific paper from the Poirrier-Falisse!

Finally, a second scientific paper has been published by the Poirrier-Falisses (the first one for me):

Poirrier JE., Poirrier L., Leprince P., Maquet P. “Gemvid, an open source, modular, automated activity recording system for rats using digital video“. Journal of Circadian Rhythms 2006, 4:10 (full text, doi)

It is still a provisional PDF version, but already available on the web and Open Access (of course)! Here is my BibTeX entry. I will upload the source code to the project website tonight.

Associative memory

On Sunday, we went to a restaurant with my in-laws. They have been going there for a long time and they personally know the owner. Each time they eat there, it’s an opportunity to chat about their respective families; so, when my wife is with her parents, the owner remembers what my wife does in life. But over the last few months, we went there three or four times alone (i.e. just my wife and me), and the owner never recognised my wife.

The funny thing (imho) is that my wife’s job is associated (in the owner’s mind) with her and her parents but not with her alone. When she’s alone, no particular association is made. But when she is with her parents, the owner associates the parents with the daughter and the daughter with her job. My wife’s job is stored somewhere in the owner’s brain. But the electrical/biochemical path to retrieve this information is not direct and only works with certain associations. The human brain is quite amazing …