Category: Computers

Google and the bottom search box

When Google rolled out “Instant”, they also removed the bottom search box. Bad idea.

Google Instant is a nice, web 2.0 improvement to Google “classic” where results appear as you type your query in the search box. Google claims that Instant can save 2 to 5 seconds per search. Maybe.

But, at the same time, they removed the bottom search box. I used this search box extensively: when you enter your search criteria and look at the results, you may want to refine your search, add some terms, remove or exclude others, etc. With a second search box at the bottom, you could do it directly after having browsed the first batch of results. Without this box at the bottom, you can’t: you have to remember to scroll all the way back to the top of the page and make the change in the only, upper text box. You lose 2 seconds scrolling back to the top, and you may lose an idea on the way (especially if you have 1001 ideas at the same time). When you perform a lot of searches per day, the time you gain with Instant per search is largely lost in the time spent scrolling back to the top. I’m not the only one to think it was a bad idea.

But if you want to keep Google as (one of) your search engine(s) and want to get the second search box at the bottom back (and optionally keep Instant too), just use “Encrypted Google SSL Beta” (URL in clear: https://encrypted.google.com/).

Happy Software Freedom Day 2010!

Today, September 18th 2010, it’s Software Freedom Day all over the world. It is an annual worldwide celebration of Free Software, a public education effort that aims to increase awareness of Free Software and its virtues, and to encourage its use.

On the SFD website, there aren’t many events registered for Belgium. There is only one, in fact, in Oostende (LiLiT is doing an install party in Liège but I can’t see any reference to SFD; still, it’s a good initiative!). Well, an SFD on September 18th in Belgium might not have been a good idea if the goal is to increase awareness of Free Software: more than half of the population is celebrating the Walloon Region or preparing for a car-free Sunday in Brussels (while others have simply been looking for a government since April 2010!). So, at a personal level, I decided to give Ubuntu a try (10.04 LTS).

In terms of user experience, you can’t beat the installation process of Ubuntu (my comparison points are Fedora 13 and any version of Windows XP, Vista or 7 that is not on a PC-specific image disc). Seven configuration screens with rather simple questions and that’s it. There are choices you can’t make, like selecting which software you want installed and available on the next reboot. But most of the general-purpose software is there: a web browser, a word processor, some games, a rudimentary movie player and a music player. The “Software Center” is also readily visible, so you can’t miss it, and it seems to be the obvious choice if you want to install any other software.

New Ubuntu desktop for Freedom Software Day 2010

The real test will now be whether one can actually work with it. If I don’t post any furious comment against some feature, or anything about the installation of some software in the coming days / weeks, you’ll know I’m still working with this Linux flavour.

Browser hardware acceleration issue?

Browser hardware acceleration is meant to render websites faster by allowing the graphics card (its GPU) to directly display “things” (videos, animations, canvas, compositing, etc.) on the screen. By bypassing software rendering, browsers can make lots of websites render visibly faster. All major browsers jumped on this: Firefox, Chrome, Internet Explorer and Opera (a post from 2008!).

I understand that enhancing the user’s experience while surfing the web is worthwhile. Hardware acceleration opens the door to unseen compositions, to new types of animations, to new kinds of applications. Directly in your favourite browser.

Correct me in the comments if I’m wrong, but hardware acceleration should not lead to fragmentation of the web landscape. HTML5 seems to be the standard behind which browser developers are adding their acceleration engines.

However, an issue (from the user’s point of view) will probably be that hardware acceleration will further encourage the emergence of a consumer-only web. A lot of your applications will be in your browser, with your data in someone else’s data center. You want your data safe? You need to trust your provider’s security measures. You simply want your data on your hard drive? I think you’ll have a problem there. But I agree it’s not the technical implementation that will be responsible for that.

First LaTeX Beamer presentation seen in a proteomics conference

There is another issue I see with browser hardware acceleration. And it’s very down-to-earth. As you often see in presentations containing videos, the slides are displayed through the beamer but the video is not (a black rectangle is shown instead). You can easily disable hardware acceleration in most presentation software (if it isn’t disabled by default). But, with hardware acceleration fully integrated in the browser, what will be displayed through the beamer if we have to demo a website, or simply when the presentation software is the browser? A page with patches of black rectangles? I hope not.

Why do I blog this? I enjoy reading about the (technical) details of (browser) hardware acceleration. In general, I am very interested in all the new developments in IT regarding the use of GPUs and graphics cards’ computational power to solve current issues or enable future developments. But I also use these (new) technologies every day. So I don’t want technological improvements on one hand to cause trouble on the other.

Linking two recent posts seen elsewhere

  • Namechk.com (Check Username Availability at Multiple Social Networking Sites) bookmarked on delicious.com by Philippe
  • one possible use of the Facebook profile information: “generating a good dictionary from fabebook-names-original.txt to brute-force password”, seen on Twitter.com/adulau

Now use Namechk to find all usernames of >= 2 letters used on more than one service. I guess there is a high probability that two identical username strings on two different services belong to the same physical person. Look at their profiles/activities/pages/whatever on the various websites, and you now have a wonderful network of knowledge about these people. I also guess that if a flaw allowing the recovery of users’ passwords is discovered in one of these services, you could try the same password on all the other services for the same username.
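To make the first idea concrete, here is a minimal Python sketch of such a cross-service check. The profile URL patterns are my own assumptions for illustration (Namechk’s internals are not public), and a 200 response is only a rough signal that a profile exists:

```python
# Hypothetical sketch: probe public profile URLs to see whether the same
# username exists on several services. The URL patterns are assumptions,
# not Namechk's actual method, and may change over time.
import urllib.request
import urllib.error

PROFILE_PATTERNS = {
    "twitter": "https://twitter.com/{username}",
    "flickr": "https://www.flickr.com/people/{username}",
    "github": "https://github.com/{username}",
}

def services_with_username(username):
    """Return the services where the profile page answers 200 OK."""
    found = []
    for service, pattern in PROFILE_PATTERNS.items():
        url = pattern.format(username=username)
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                if response.status == 200:
                    found.append(service)
        except urllib.error.HTTPError:
            pass  # 404 and friends: no such profile on this service
        except urllib.error.URLError:
            pass  # network problem: ignore in this sketch
    return found

if __name__ == "__main__":
    user = "example_username"  # placeholder
    hits = services_with_username(user)
    if len(hits) > 1:
        print(f"'{user}' exists on {hits}: possibly the same person")
```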

Or take Alexandre’s fabebook-names-original.txt items and sign up on other services with them. You have now saturated the web 2.0 namespace. People will need to be more creative when signing up from now on.

(ok, I know these service providers should have put some protection in place in order to avoid large-scale abuse of their services)

Tetris wall

Dear wife,

I agree: you can have the decoration you want everywhere in our new home. You can have all the furniture and appliances you want in the kitchen. I’m OK if all the shelves with my computer books go in the basement. OK too if you don’t want to see the file server in the living room. Agreed: I’ll put Windows back on your laptop. But …

But I absolutely want one wall painted like these:

Tetris wall 1

Tetris wall 2

Jean-Etienne 😉

Photos found on Olybop.info (without original credit). Other walls with Tetris can be found on Flickr.

Bittorrent used to deploy updates

I just watched a video of Larry Gadea, who works at Twitter: Twitter – Murder Bittorrent Deploy System (a talk given at CUSEC 2010).

Briefly, the problem Twitter was facing was deploying updates to thousands of servers in a short amount of time while dealing with errors (e.g. broken servers). A nice, simple, cool and free way of solving this issue was to use the Bittorrent protocol (via Python and a stack of other free software) to actually deploy the updates. In summary, you go from a unique repository facing thousands of requests at approximately the same time:

And you end up with a nice “distribution chain”:

The beautiful thing is that they now go 75 times faster than before!
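As a back-of-envelope illustration (with made-up numbers, not Twitter’s real figures), here is why the distribution chain wins: a central repository pushes full copies one after another through its single uplink, while a swarm roughly doubles the number of seeding machines at every round, so total time grows with log2(N) instead of N:

```python
# Rough model with illustrative assumptions only (not Twitter's figures):
# a central server uploads N full copies through its own uplink, while in
# a swarm every machine that has the file re-seeds it, so the number of
# copies roughly doubles every "round".
import math

N = 2000            # servers to update (assumption)
SIZE_MB = 500       # update size in MB (assumption)
UPLINK_MBPS = 1000  # uplink speed, megabits/s (assumption)

transfer_time = SIZE_MB * 8 / UPLINK_MBPS        # seconds per full copy

central = N * transfer_time                      # one by one from the repo
swarm = math.ceil(math.log2(N)) * transfer_time  # ~log2(N) doubling rounds

print(f"central: {central:.0f}s, swarm: {swarm:.0f}s, "
      f"speed-up: ~{central / swarm:.0f}x")
```

With these assumed numbers the model even overshoots the 75× reported in the talk; the point is simply the linear-versus-logarithmic scaling.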

And now, the video:

http://vimeo.com/11280885

The Murder software is hosted on Github (Apache 2 license).

Why do I blog this? First, I like to see simple ideas no one had before implemented like this. I also wonder how other companies facing the same problem are doing (status.net, for example; I don’t think it could be useful for Forban). Finally, you see, Bittorrent is sometimes about good stuff too!

Cognitive Surplus visualised

Among the 300-and-more RSS items in my aggregator this week, there are two great ones from Information is Beautiful, a blog gathering (and publishing its own) nice ways to visualise data.

The first one is based on a talk by Clay Shirky who, in turn, was referencing his book Cognitive Surplus. In Cognitive Surplus visualized, David McCandless simply represented one of Shirky’s ideas: 200 billion hours are spent each year by US adults just watching TV, whereas only 100 million hours were necessary to create Wikipedia (the platform plus the content, I guess) …
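Spelling out the arithmetic makes the gap vivid; a trivial computation with the figures quoted above shows that each year of US television watching consumes the equivalent of about two thousand Wikipedias:

```python
# Figures from Shirky's talk as quoted above.
tv_hours_per_year = 200e9  # hours US adults spend watching TV, per year
wikipedia_hours = 100e6    # hours needed to create Wikipedia

print(tv_hours_per_year / wikipedia_hours)  # -> 2000.0 Wikipedias per year
```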

Cognitive Surplus visualised from Information Is Beautiful

It makes you think about either the waste television helps to produce, or the potential of human brain(s) if relieved from the burden of television.

The second interesting post appeared in fact on information aesthetics, a blog where form follows data (it references Information is Beautiful, but I can’t find the exact post). In Top Secret America: Visualizing the National Security Buildup in the U.S., Andrew Vande Moere relates “an extensive investigative project of the Washington Post that describes the huge national security buildup in the United States after the September 11 attacks”. The project website contains all the ingredients of a well-documented investigation, with the addition of interactive maps and Flash-based interfaces allowing users to build their own view of the project.

Top Secret America from the Washington Post

It’s nice to see investigative journalism combined with beautiful data visualisation and handling!

Network bandwidth during lecture

One of the differences between university lectures in Belgium and in the United States of America is that, in the US, most of the students are carefully “listening” to the lecture while having their laptop on and connected to the internet. I didn’t depart from this custom 🙂

Yesterday, I was trying to download a Linux DVD (that’s what university networks are for, isn’t it?) and observed an interesting pattern in the network speed during the lecture. If I assume that the total available bandwidth remained constant, the share available to me dropped drastically as the lecture went on.

evolution of network speed

Now, if you imagine that the y-axis isn’t the remaining network bandwidth but the students’ level of attention, you might not be far from the truth 😉 Attention drops rather quickly during the theoretical lecture, and people were very busy during the practicals. Note that the remaining bandwidth was also very small during World Cup matches …
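For anyone who wants to reproduce this kind of curve, here is a small sketch of how the measurement could be logged (the download URL is a placeholder of my own): it reads a long download in chunks and prints the observed throughput at regular intervals.

```python
# Minimal throughput logger: download a large file in chunks and print
# the observed speed once per interval. Replace the URL with a real mirror.
import time
import urllib.request

URL = "http://example.com/linux.iso"  # placeholder
CHUNK = 64 * 1024                     # bytes per read
INTERVAL = 10                         # seconds between samples

with urllib.request.urlopen(URL) as stream:
    window_bytes = 0
    window_start = time.time()
    while True:
        chunk = stream.read(CHUNK)
        if not chunk:
            break  # download finished
        window_bytes += len(chunk)
        elapsed = time.time() - window_start
        if elapsed >= INTERVAL:
            print(f"{time.strftime('%H:%M:%S')}  "
                  f"{window_bytes / elapsed / 1024:.0f} KiB/s")
            window_bytes = 0
            window_start = time.time()
```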

FluTE makefile for wxDev-C++ (Windows)

FluTE is an influenza epidemic simulation model written by Dennis L. Chao at CSQUID. It works out of the box on GNU/Linux (just type make and run it).

I wanted to see how it works. But since I’m temporarily stuck with a Windows laptop, I downloaded a free C++ compiler for Windows (wxDev-C++), imported all the files into a project and compiled. For those who want to try, here are the project file and the specific makefile in a zip file (2 kb). Just decompress the FluTE archive (I used version 1.15), copy the two files from the zip file above and launch the IDE. In the project options (Alt+P), specify the custom makefile (in the "Makefile" tab) as the one from the zip file above. Compile (Ctrl+F9). Done.

On my Intel Core2 Duo T5450 (2 GB RAM), it took 6 minutes to simulate the "two-dose" example.

Please note that I didn’t try to compile with OpenMPI. Maybe for next time.

Software license and use of end-product

In one of his buzzes, Cédric Bonhomme drew my attention to the Highcharts JavaScript library. This library can produce beautiful charts of various types with some Ajax interaction. The only negative point, imho, is that it is dual-licensed and both options deprive you of your freedom:

  • there is a Creative Commons Attribution-NonCommercial 3.0 License: you can use the library for your non-profit website (see details on the licensing page);
  • there is a commercial license for any other website.

Now what if we only need the end product, i.e. the resulting chart, in a commercial environment? What the license covers is just the use of the JavaScript library in a website, not the resulting chart. If a company chooses to use Highcharts internally to render some beautiful charts and just publishes (*) the resulting image, I guess they can simply download the library and use it (* by “publishing”, I mean: publishing a scientific paper in a peer-reviewed journal, not publishing on the company’s website).

On the other hand, no one has ever questioned that commercial companies must hold licenses for all the proprietary software they use to produce anything, from charts to statistical data, even when they only publish results obtained with those tools. So the “trick” here would be that, by changing the medium on which you display the end results (from website to paper, even if the paper is a PDF on the journal’s website), you could use the free-to-download license even in a commercial environment, for an article from a commercial company. I’m not sure this was the original intention of Highslide Software.