Category: Computers

Privacy -vs- information conservation time

In my opinion, privacy issues are a by-product of information conservation times reaching infinity.

For centuries and more, humans relied on their own type of memory. When information reaches the brain, it is stored in short-term memory. When it is relevant and/or repeated, it is gradually consolidated into long-term memory (this is roughly the process).

Schematic memory consolidation process

The invention of oral transmission of knowledge, then of written transmission (including Gutenberg) and, to a certain extent, of the internet successively increased how long information shared with others is retained. The switch from oral to written transmission of knowledge also sped up the dissemination of information and gave it a fixed, un- (or less-) interpreted nature.

Duration of information over time

With the internet ("1.0", to drop a buzzword) the duration of information is also extended, but it remains somewhat limited; it was merely a copy of printing (except for the speed of transmission). Take this blog, for instance: information stored here will stay as long as I maintain it or keep the engine alive. The day I decide to delete it, the information is gone. And the goal of the internet was to be able to reach information where it is issued, even if there is trouble in the communication pipes.

However, on top of this internet came a series of tools like search engines ("Google") and centralized social networks ("Facebook"). Now information is copied, duplicated, reproduced, partly because the digital nature of the medium allows that with ease, but also because these services deliberately concentrate information that would otherwise be spread out. Google concentrates (part of) the information in its own datacenters in order to extract other types of information and serve searches faster. Facebook (and other centralized social networks) asks users to voluntarily keep their (private) information in its own data repository. And apparently the NSA is also building its own database about us on its premises.

In my opinion, privacy issues were already there whenever we shared information before (what do you share? with whom? in which context? …). But the duration of information is now becoming an issue in itself.

Is it so difficult to maintain a free RSS reader?

A few months ago Google decided to retire Google Reader (it stopped working on July 1st, 2013). As it was simple, effective and good-looking, a lot of people complained about this demise. A few days ago The Old Reader, one of the most successful replacements for Google Reader, also announced it will close its gates and keep only early-registered users. And today Feedly, another successful alternative, announced it is introducing a pro version at 5.00 USD per month.

One of the reasons often put forward is the difficulty for these relatively small projects (small before the Google Reader demise) to handle the many users who migrated to their platforms: difficulties in terms of hardware resources, but also human resources, finances, etc.

So, to answer my own question: yes, it looks like it is difficult to maintain a free RSS reader with an extensive number of users. And free software alternatives like Tiny Tiny RSS, pyAggr3g470r or Owncloud can be difficult for users to install (and especially to maintain – the same types of difficulties: the need for a host and for technical capabilities, time, money (even if at a different scale), …).

Two thoughts on this. First, people are used to free products on the internet (I count myself among them), and we take for granted that services on the web are and will remain free. RSS and its associated readers were a great invention to keep track of information coming from various sources. However, with the explosion in the number of these sources, is RSS still a valid tool? One solution is to restrict ourselves to a few carefully selected sources of information. The other is to imitate statistics: summary statistics exist for raw data, so data mining should become as easy to use for raw information (but I don't think data mining is as easy as summary statistics).

Which leads me to my second thought: aren't these just signs of the end of RSS as we know it? People started thinking about it because a giant web service provider removed its "support" for RSS. What if this is just the end of RSS because it is no longer adapted to "modern" use?

Let me try a comparison. E-mail is an older system than RSS. It is, however, still there. It serves another purpose: one-to-one or one-to-few communication. But since its origin, e-mail clients have tried to innovate by adding features, among which is automated classification of e-mail. Spam filters have existed for a long time. Rules can be defined in most e-mail clients. Gmail (again from Google) now classifies your e-mail into "Priority", "Social", etc. These tools help us de-clutter our inbox and keep only relevant e-mails in front of us when we need them. I think RSS would benefit from similar de-cluttering/summarizing tools. We just need to find / invent them.

How to write data from R to Excel (even if you don’t have Excel)

Following my previous posts on how to read/write Excel files from Matlab, here is the way I read/write Excel files from R. Again, it seems the Apache POI Java library has made developers' lives easy. I use here the simple-yet-powerful xlsx package (documentation here in PDF; project website).

Here you don’t need to install any additional files, installing the xlsx package from R does all the dirty work that for you. Then, reading an Excel file is very easy:

library(xlsx)
inData <- read.xlsx2("input.xls", sheetName="Contactmatrix", header=FALSE)

I usually use read.xlsx2 instead of read.xlsx. It is said to be faster with large matrices, and I have had the opportunity to experience that, so I stick with it. You can read xls, xlsx and xlsm files without issue (well, with the simple formatting I usually use).
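
Note that read.xlsx2 returns character columns by default; if you need numbers straight away you can pass colClasses. A minimal sketch (assuming here that the sheet has two columns and you want the second one numeric):

# read.xlsx2 returns character columns by default;
# colClasses converts them while reading (two columns assumed here)
inData <- read.xlsx2("input.xls", sheetName="Contactmatrix", header=FALSE,
                     colClasses=c("character", "numeric"))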

Writing to an Excel file is also very easy:

write.xlsx2(outData, "output.xls", sheetName="Random2", col.names=FALSE, row.names=FALSE)

Easy, isn’t it?
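
If you need several sheets in the same workbook, write.xlsx2 can also append to an existing file. A minimal sketch (moreData and the "Summary" sheet name are just made-up examples):

# the first call creates the workbook, the second appends an extra sheet to it
write.xlsx2(outData, "output.xls", sheetName="Random2", col.names=FALSE, row.names=FALSE)
write.xlsx2(moreData, "output.xls", sheetName="Summary", col.names=FALSE, row.names=FALSE,
            append=TRUE)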

Any free solution for the demise of Google Reader?

Last week Google announced it will shut down its Reader service, a web-based RSS reader. It lets you keep up to date with news from around the net in one central location. I liked the service for three reasons (on top of the fact that it's free, $0, to use):

  1. It’s web-based, accessible from anywhere/everywhere with a simple browser;
  2. It’s text-based, you can quickly scan headlines and use the powerful search function from Google;
  3. It’s backed by an API so you can use it via different apps on different platforms and they all stay synchronised (the web/mobile version of Reader is not as efficient as the web/desktop version; hence the proliferation of apps using Reader as a backbone).

Of course it frustrated a lot of people, from scientists to consultants, to name only a few. People are looking for alternatives (you can do a search on Google while the Search service is still working). Feedly is cited very often as the next best alternative. However, its nice graphical interface conflicts with my second reason to like Google Reader: it's text-based. The Old Reader also looks interesting; it is text-based, but there are no apps on different platforms yet. But both are also proprietary and can be turned off (or changed to a pay-for-use model) at any moment 😦

An interesting solution could be an Evernote RSS reader. Evernote already has a portfolio of applications ranging from note-taking software to screenshots, drawing, food, … They have a synchronisation process in place. Why not an RSS reader, then?

Back to the main track … Fortunately – in a way – Google Takeout allows you to retrieve all your data from Reader, along with an OPML file containing all your subscriptions. You can feed this file into another reader and go forward. Starred items are also retrieved (but which reader can use them?). And if you are interested, The Guardian has an interesting article about the average lifespan of Google's free services (1,459 days, see below) and other nice facts. I guess they will keep Search alive 😉

[Figure: The Guardian's chart on the average lifespan of Google's free services]

But what can be done for free (as in free speech)? One of the solutions is Owncloud (AGPL), which recently released an RSS reader add-on. Another solution could be pyAggr3g470r, a news aggregator written in Python. And I was wondering why there isn't simply an API that would allow any kind of application to connect, update and display RSS feeds. Something like the NewsCred News API, but free, simpler to use than Owncloud and with apps/website interfaces for mobile devices. And a pony with that, please.

Do you have any other solution?

Map of GAVI eligible countries in R

I was trying to reproduce in R the map of the GAVI Alliance eligible countries (by the way, I was surprised India is eligible – but that's the beauty of relying on numbers only and not on assumptions). This is the original map (there are 57 eligible countries):

[Figure: original map of the 57 GAVI-eligible countries]

I started to use the R package rworldmap because it seemed the most appropriate for this task. Everything went fine. Most of the time was spent converting the list of countries from plain English to the required "ISO3" codes (ISO3 is in fact ISO 3166-1 alpha-3). I took my source from Wikipedia.

Well, that was until joinCountryData2Map gave me this reply:

54 codes from your data successfully matched countries in the map
3 codes from your data failed to match with a country code in the map
189 codes from the map weren’t represented in your data

I should simply have read the documentation better: there is another small command that should not be overlooked, rwmGetISO3. So what are the three codes that failed to match?

Although you can visually compare the produced map with the map above, R (and rworldmap) can also, indirectly, give you the culprits:

tC2 = matrix(c("Afghanistan", "Bangladesh", "Benin", "Burkina Faso", "Burundi", "Cambodia", "Cameroon", "Central African Republic", "Chad", "Comoros", "Congo, Dem Republic of", "Côte d'Ivoire", "Djibouti", "Eritrea", "Ethiopia", "Gambia", "Ghana", "Guinea", "Guinea Bissau", "Haiti", "India", "Kenya", "Korea, DPR", "Kyrgyz Republic", "Lao PDR", "Lesotho", "Liberia", "Madagascar", "Malawi", "Mali", "Mauritania", "Mozambique", "Myanmar", "Nepal", "Nicaragua", "Niger", "Nigeria", "Pakistan", "Papua New Guinea", "Rwanda", "São Tomé e Príncipe", "Senegal", "Sierra Leone", "Solomon Islands", "Somalia", "Republic of Sudan", "South Sudan", "Tajikistan", "Tanzania", "Timor Leste", "Togo", "Uganda", "Uzbekistan", "Viet Nam", "Yemen", "Zambia", "Zimbabwe"), nrow=57, ncol=1)
apply(tC2, 1, rwmGetISO3)

In the results, some country names are given by GAVI in a slightly different way than rworldmap expects. For instance, "Congo, Dem Republic of" has to be changed for rworldmap into "Democratic Republic of the Congo" (ISO3 code: COD), and "Côte d'Ivoire" into "Ivory Coast" (ISO3 code: CIV). An interesting resource for the country names recognised by rworldmap is the UN list of countries or areas, codes and abbreviations.
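
Here is a minimal sketch of how these renames could be scripted before the lookup (only the two renames mentioned above are shown; any remaining mismatch can be handled the same way):

# rename the GAVI country names that rwmGetISO3 does not recognise
fixes <- c("Congo, Dem Republic of" = "Democratic Republic of the Congo",
           "Côte d'Ivoire" = "Ivory Coast")
toFix <- tC2[, 1] %in% names(fixes)
tC2[toFix, 1] <- fixes[tC2[toFix, 1]]
apply(tC2, 1, rwmGetISO3)

Once you correct the names, you can have your map of GAVI-eligible countries: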

[Figure: map of GAVI-eligible countries produced in R]

And here is the code:

# Displays map of GAVI countries
library(rworldmap)
theCountries <- c("AFG", "BGD", "BEN", "BFA", "BDI", "KHM", "CMR", "CAF", "TCD", "COM", "COD", "CIV", "DJI", "ERI", "ETH", "GMB", "GHA", "GIN", "GNB", "HTI", "IND", "KEN", "PRK", "KGZ", "LAO", "LSO", "LBR", "MDG", "MWI", "MLI", "MRT", "MOZ", "MMR", "NPL", "NIC", "NER", "NGA", "PAK", "PNG", "RWA", "STP", "SEN", "SLE", "SLB", "SOM", "SDN", "SSD", "TJK", "TZA", "TLS", "TGO", "UGA", "UZB", "VNM", "YEM", "ZMB", "ZWE")
# build the data frame from the same vector of ISO3 codes (all 57 countries are eligible)
GaviEligibleDF <- data.frame(country = theCountries,
                             GAVIeligible = rep(1, length(theCountries)))
GAVIeligibleMap <- joinCountryData2Map(GaviEligibleDF, joinCode = "ISO3", nameJoinColumn = "country")
mapCountryData(GAVIeligibleMap, nameColumnToPlot = "GAVIeligible", catMethod = "categorical", missingCountryCol = gray(.8))
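
If you want to keep the map as an image file rather than only displaying it, you can wrap the plotting call in a graphics device. A minimal sketch (the file name and dimensions are arbitrary):

# save the map to a PNG file instead of plotting it on screen
png("gavi_eligible_map.png", width = 700, height = 315)
mapCountryData(GAVIeligibleMap, nameColumnToPlot = "GAVIeligible",
               catMethod = "categorical", missingCountryCol = gray(.8))
dev.off()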

Android is catching up with iOS

[Figure: reduced copy of the MBA Online Android infographic]

Well, there is nothing new in this statement. The smartphone OS Android is catching up with and even overtaking its rival iOS in many domains:

  • more activated products per day and per year in 2011,
  • more Samsung Galaxy S3 units (running Android) sold in Q3 2012 than iPhone 4S and 5 (running iOS),
  • more devices worldwide,
  • catching up with Apple's market share in tablets.

All this is summarised in an infographic designed by MBA Online (the original address is here: http://www.mbaonline.com/android/ – click at your own risk). It is sweet and colorful, with lots of numbers and some references at the end. Unfortunately these references are embedded in the image, so you cannot click on them if you ever want to read more.

Also, as I mentioned previously (for an infographic coming from a similar type of website), I didn't much like the fact that it is very, very long (see the reduced copy on the right). It makes things easy to read while scrolling down, but (your mileage may vary) I would have liked something a bit different. For instance, I would have presented this more as a succession of slides, à la PechaKucha maybe (except there is a lot of text). But the restrictive licence (CC BY-NC-ND) prohibits derivative works.

So I like my Android device. I like it when people promote it, are proud that Android is a success and talk about it. And the web is full of these infographics: a similar story about taking over the world, the successive Android versions (again very long), the tastes of Android users (versus iOS users'), a broader smartphone comparison (again very long), a Google search for them, … Choose the one you like!

How to write data from Matlab to Excel (especially when you don’t have Excel)

If you are using Matlab on an MS-Windows PC with MS-Excel installed, there is no problem reading and writing data to Excel (in case your users/customers only understand this software but you still want to do the computations in Matlab). Here is the code to read (1st line) and write (2nd line):

inputs = xlsread('inputfile.xls', 'inData', 'A1:B3');
[writeStatus, writeMsg] = xlswrite('outputfile.xls', myMatrix, 'outData', 'A1');

Now, there are several reasons why you may not be able to read and write directly to an Excel file: you have Matlab but

  • you are on MS-Windows but MS-Excel is not installed (e.g. you use OpenOffice.org or plain CSV files, or you are on a server, where even Microsoft discourages the use of Excel in the back end – see "More information" here);
  • you are on MacOS;
  • you are on Linux;
  • etc.

Matlab provides a basic way to read and write in these cases: xlsread() will only read some xls files (saved as version 97-2003, which should not be an issue – at least it is an Excel file); xlswrite() will only write a .csv file. So you can't deliver the Excel output.

Fortunately Open Source software is here to help you!

Indeed, two recently published pieces of code on Matlab Central allow you to write "directly" to Excel.

The first solution is provided by Marin Deresco and apparently uses his own binding to Excel. The simplest code is:

display(['Add Java paths']);
javaaddpath('D:\JExcelAPI\jxl.jar');
javaaddpath('D:\JExcelAPI\MXL.jar');

display(['Writing matrix to file']);
writeStatus = xlwrite('output.xls', myMatrix, 'outData');
display(['writeStatus: "' num2str(writeStatus) '"']); % 1 = success, 0 = failure

I advise you to put the two .jar files in a central location (as done in the code above). That way all your functions will have access to the same version of these files (and updating them for all your projects will be much easier).

But there are three small issues. First, the generated matrix will contain only text fields (and they are formatted as text, not numbers); it doesn't really matter because Excel can then use this text as numbers directly. The second issue is that, although it will actually write the requested data, an error will appear when opening the Excel file if you specified a sheet ('outData' here) that already exists. Finally, you can only start writing at A1.

The second solution is provided by Alec de Zegher, partly in response to the limitations of the previous code. Alec takes advantage of the Apache POI Java library to handle the writing side. His code also allows you to start writing anywhere in the sheet. And it uses the same call structure:

display(['Add Java paths']);
javaaddpath('D:\poi_library\poi-3.8-20120326.jar');
javaaddpath('D:\poi_library\poi-ooxml-3.8-20120326.jar');
javaaddpath('D:\poi_library\poi-ooxml-schemas-3.8-20120326.jar');
javaaddpath('D:\poi_library\xmlbeans-2.3.0.jar');
javaaddpath('D:\poi_library\dom4j-1.6.1.jar');

display(['Writing matrix to file']);
writeStatus = xlwrite('output.xls', myMatrix, 'outData', 'B2');
display(['writeStatus: "' num2str(writeStatus) '"']); % 1 = success, 0 = failure

As you can see, there are some more .jar files to add. On the other hand, it uses a syntax that is very similar to the previous one and to the original Matlab command. Thanks, Marin and Alec, for this great help!

This was tested with Matlab R2012a on Windows XP 64-bit and with the MCR (same version) on Windows 2008 64-bit – according to the respective pages for the code, it should work on Mac and other platforms/configurations too.
Photo credits: matlab rubiks cube by gusset, on Flickr (licence by-nc-sa)

Android-based smartphones market share in Asia

31%

Tonight I was wondering what Android's market share in Asia was. It is 31% according to a recent study from Ericsson's ConsumerLab group (reported by TechRepublic). Although dominant in most of the studied countries, Android is not dominant in Singapore (iOS has 46%), Indonesia (RIM has 29%) or Vietnam (Symbian has 26%).

Last year ABI Research released a study showing that Android-based smartphones' market share grew from 16% in 2010 to 52% in 2011 (but that study included tablets and did not cover exactly the same countries as the Ericsson study). Voila 🙂

Idea shared #2 – the feedback toothbrush

After the T-shirt that measures your sleep better than an app, here is idea #2: the toothbrush that provides some feedback.

The idea is simple – so simple it has already been applied elsewhere: provide feedback about how well people brush their teeth. The Brushduino focuses on entertaining kids to keep them brushing in the right place for the right amount of time. Other projects (with many variants) focus specifically on the time spent brushing.

I think embedded projects can go the extra mile (provided they are small enough). You can embed a gyroscope and take into account the types and amount of movements you make while brushing your teeth; this way you also get the brushing time anyway. The toothbrush could communicate with a computer to transmit the data. I guess working offline would save some space, at the price of losing direct feedback. This direct feedback could also come in a simplified form, a bit like the Nike+ FuelBand does: it is not an exact measure that you need, but merely the fact that you brushed vigorously enough and for long enough. The way a gyroscope works should give the space covered – and, indirectly, whether you covered every tooth (to be checked). Connecting the toothbrush to a smartphone or a computer could provide the numerical data as well as some social features (as long as you think brushing your teeth can be something shared with your friends).

In terms of design, it could simply look like this:

This kind of device will not replace the advice given by dentists. But it can help / accompany people during their daily activities.

Would you buy this device?

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Idea shared #1 – measure your sleep

I don't consider myself as having more or better ideas than others. But I gradually realized I have less and less time for some activities like programming, electronics, etc. Maybe that's how we realize we are getting older, now that we are adults. So I decided to share these ideas rather than fueling the illusory idea that I will implement them one day.

So idea #1 is about measuring sleep. I recorded animals' sleep during my Ph.D. – but that was thanks to an EEG device. I think that if you want to understand or improve something, you first have to measure it in one way or another. So I started trying to measure my own sleep with an app (Sleep Cycle). But despite its good reviews it doesn't work, at least for me.

For instance, the chart below is supposed to represent my sleep cycle for the night of September 14th, 2012. I was certainly not in deep sleep at 1.30 AM (the baby did not want me to sleep right away). I also woke up around 4 AM (the baby was again the reason). And I woke up at 6.45 AM (with a backup alarm clock – I had to get up for work).

My sleep graph with the Sleep Cycle app for the night of September 14th, 2012

The last version of the Sleep Cycle app improves things a bit by providing more statistics (so at last you can rely on the approximate time slept and compare your "sleep" across days, etc.), more beautiful graphs and the ability to download raw data. Don't be fooled, however: "raw data" means only start time, end time, sleep quality (how is it measured?), time in bed, number of wake-ups and sleep notes. You unfortunately won't be able to reproduce anything like the graph above.

Hardware devices like the Wakemate or the Zeo might give better results because part of the solution is using a real accelerometer. But the Up story shows that not everything is obvious in this world.

For me the fundamental flaw is relying only on body movements to detect, quantify and even score sleep. Of course, there is abundant scientific literature about how muscle tone (of different muscles) relates to sleep stages (see here and here for introductory texts). But this is usually measured with electrodes glued to your body.

So I think it could be very easy to develop a simple, cheap "sleep T-shirt" with light electrodes that just stick to your body when you sleep (and you put enough of them so that at any time at least some of them are connected). In fact, the Rest SleepShirt might already do the job – it's a pity they don't elaborate more on how they measure and collect data (but I understand they will want to sell the product later on ;-)). In my idea, light wires would then go to a small pouch where they would be connected to something like a LilyPad Arduino (because it is flexible and can be sewn onto a T-shirt – there may be other devices available). The LilyPad would serve as a data collector or as a data transmitter to a computer / a smartphone / a specific receiver (coupled to a real clock, like the Zeo). The advantage would be to remain the sole owner of your sleep data – but of course the business plan should include some "social" features 😉

In the end it should look a bit like this:

[Figure: sketch of the sleep T-shirt idea]

The other advantage would be that, this way, you may also measure electrical activity through the body.

Will it work? I'm sure of it. Will it be enough to sleep correctly? I don't think so: measuring something does not by itself improve it. But at least you will have some clue about what is going on. Other advice may be interesting too. And for the moment nothing replaces a visit to a real doctor / sleep specialist!

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.