In my opinion, privacy issues are a by-product of information conservation times approaching infinity.
For centuries and more, humans relied on their own kind of memory. When information reaches the brain, it is stored in short-term memory; when it is relevant and/or repeated, it is gradually consolidated into long-term memory (this is roughly the process).
Oral transmission of knowledge, then written transmission (incl. Gutenberg) and, to a certain extent, the internet: each successively increased how long information shared with others is retained. The switch from oral to written transmission also sped up the dissemination of information and fixed it in a form that is less open to reinterpretation.
With the internet (“1.0”, to use a buzzword) the lifetime of information is also extended, but it remains somewhat limited; it was merely a copy of printing (apart from the speed of transmission). Take this blog, for instance: the information stored here will stay as long as I maintain it or keep the engine alive. The day I decide to delete it, the information is gone. And the original goal of the internet was to reach information where it is published, even when parts of the communication pipes are down.
However, on top of this internet came a series of tools like search engines (“Google”) and centralized social networks (“Facebook”). Now information is copied, duplicated, reproduced, partly because the digital nature of the medium makes that easy, but also because these services deliberately concentrate information that would otherwise stay spread out. Google concentrates (part of) the information in its own datacenters in order to extract other kinds of information from it and serve searches faster. Facebook (and other centralized social networks) asks users to voluntarily keep their (private) information in its data repository. And apparently the NSA is also building its own database about us on its premises.
In my opinion, privacy issues already existed whenever we shared information before (what do you share? with whom? in which context? …). But the lifetime of that information is now becoming an issue of its own.
Last week Google announced it will shut down its Reader service, a web-based RSS reader that lets you keep up with news from around the net in one central location. I liked the service for 3 reasons (on top of the fact that it’s free, $0, to use):
It’s web-based, accessible from anywhere/everywhere with a simple browser;
It’s text-based: you can quickly scan headlines and use Google’s powerful search function;
It’s backed by an API, so you can use it via different apps on different platforms and they all stay synchronised (the web/mobile version of Reader is not as efficient as the web/desktop version, hence the proliferation of apps using Reader as a backbone; a sketch of that API follows below).
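For the curious, this is roughly what that backbone looked like from code. A minimal sketch, assuming the unofficial, reverse-engineered Reader endpoints and the (long deprecated) ClientLogin flow that third-party clients relied on; the paths and parameters below are from memory, not a documented contract:

```python
import requests

def reader_token(email, password):
    # ClientLogin: Google's old username/password auth flow.
    # "service=reader" and the URL below are assumptions, not documented API.
    resp = requests.post('https://www.google.com/accounts/ClientLogin',
                         data={'Email': email, 'Passwd': password,
                               'service': 'reader', 'accountType': 'GOOGLE'})
    resp.raise_for_status()
    for line in resp.text.splitlines():
        if line.startswith('Auth='):
            return line[len('Auth='):]
    raise RuntimeError('no Auth token in ClientLogin response')

def reading_list(token, count=20):
    # The aggregated "reading-list" stream, returned as a plain Atom feed.
    resp = requests.get(
        'https://www.google.com/reader/atom/user/-/state/com.google/reading-list',
        params={'n': count},
        headers={'Authorization': 'GoogleLogin auth=%s' % token})
    resp.raise_for_status()
    return resp.text  # Atom XML: feed it to any parser, e.g. feedparser
```

Nothing exotic: authenticate once, then fetch streams over plain HTTP. That simplicity is exactly why so many clients could build on it.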
An interesting solution could be an Evernote RSS reader. Evernote already has a portfolio of applications covering note-taking, screenshots, drawing, food, … They have a synchronisation process in place. Why not an RSS reader then?
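To make the idea concrete, here is a minimal sketch of such a reader built on the pieces Evernote already exposes: pull a feed, file each entry as a note. This assumes the official Evernote Python SDK plus the feedparser library; the developer token and feed URL are placeholders:

```python
import feedparser  # third-party RSS/Atom parser
from xml.sax.saxutils import escape, quoteattr
from evernote.api.client import EvernoteClient
import evernote.edam.type.ttypes as Types

DEV_TOKEN = 'your-developer-token'   # placeholder
FEED_URL = 'http://example.org/feed.rss'  # placeholder

client = EvernoteClient(token=DEV_TOKEN, sandbox=True)
note_store = client.get_note_store()

for entry in feedparser.parse(FEED_URL).entries:
    note = Types.Note()
    note.title = entry.title
    # Note bodies must be ENML, Evernote's XHTML dialect.
    note.content = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">'
        '<en-note><a href=%s>%s</a></en-note>'
        % (quoteattr(entry.link), escape(entry.title)))
    note_store.createNote(note)  # Evernote's sync takes it from here
```

Once the entries are notes, Evernote's existing synchronisation would carry them to every device for free, which is the whole point.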
But what can be done for free (as in free speech)? One solution is Owncloud (AGPL), which recently released an RSS reader add-on. Another could be pyAggr3g470r, a news aggregator written in Python. And I was wondering why there isn’t just a simple API that would allow any kind of application to connect, update and display RSS feeds; something like the NewsCred News API but free, simpler to use than Owncloud and with app/website interfaces for mobile devices. And a pony with that, please.
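To make that wish a bit more concrete, here is the kind of minimal surface I have in mind, sketched with Flask; the resource names and fields are invented for illustration and match no existing service:

```python
# A deliberately tiny sync API: subscribe, list items, mark read.
from flask import Flask, jsonify, request

app = Flask(__name__)
feeds = {}   # feed_url -> {'title': ...}
items = []   # each: {'id': ..., 'feed': ..., 'title': ..., 'read': bool}

@app.route('/feeds', methods=['GET', 'POST'])
def feeds_endpoint():
    if request.method == 'POST':
        url = request.json['url']
        feeds[url] = {'title': request.json.get('title', url)}
        return jsonify({'subscribed': url}), 201
    return jsonify(list(feeds))

@app.route('/items')
def items_endpoint():
    unread_only = request.args.get('unread') == '1'
    return jsonify([i for i in items if not (unread_only and i['read'])])

@app.route('/items/<int:item_id>/read', methods=['POST'])
def mark_read(item_id):
    for i in items:
        if i['id'] == item_id:
            i['read'] = True  # the sync state every client shares
            return jsonify(i)
    return jsonify({'error': 'unknown item'}), 404

if __name__ == '__main__':
    app.run()
```

Any mobile app or website could hit those same three endpoints and stay in sync; that shared read/unread state is essentially what made Reader such a good backbone.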
Google+ (G+) is a social networking and identity service operated by Google. It started a few months ago as a closed service from which you can’t get any data out and where the only possible interaction (read/write/play) is via the official interfaces (i.e. the web and Android clients). Google promised to release a public API and it partly did so tonight, here.
As they stated, “this initial API release is focused on public data only — it lets you read information that people have shared publicly on Google+” (emphasis is mine). So you can already take most of your data out of G+ (note that it was already possible to download your G+ stream with Takeout from the Google Data Liberation Front). As usual, it’s a RESTful API with OAuth authorization. It comes with its own rules and terms (they could be interesting to add to GooDiff). The next step would be being able to write directly to Google+.
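Reading that public data boils down to a single keyed HTTP call. A minimal sketch, assuming the v1 REST endpoints as I understand them from the announcement; the API key and the profile id below are placeholders:

```python
import requests

API_KEY = 'your-api-key'   # created in the Google APIs console
USER_ID = '1234567890'     # placeholder G+ profile id

# List a user's public activities; "public" is the collection name.
resp = requests.get(
    'https://www.googleapis.com/plus/v1/people/%s/activities/public' % USER_ID,
    params={'key': API_KEY})
resp.raise_for_status()

for activity in resp.json().get('items', []):
    print(activity['published'], activity['title'])
```

An API key is enough for public data; OAuth only comes into play for anything tied to a signed-in user.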
I only tried the examples so far, but unfortunately I got an authorization error. I won’t go further tonight, but their error screen is interesting 🙂
During lunchtime, I discovered an old street art video (well, old = 2010) in which people poured hundreds of liters of paint onto Rosenthaler Platz (Berlin, Germany) to visualize traffic patterns (below: screenshot and video).
When Google rolled out “Instant”, they also removed the bottom search box. Bad idea.
Google Instant is a nice, web 2.0 improvement over Google “classic”: results appear as you type your query in the search box. Google claims that Instant can save 2 to 5 seconds per search. Maybe.
But, at the same time, they removed the bottom search box. I used that box extensively: after entering your search terms and looking at the results, you often want to refine the search, add some terms, remove or exclude others, etc. With a second search box at the bottom, you can do that right after browsing the first batch of results. Without it, you can’t: you have to remember to scroll all the way back to the top of the page and make the change in the only remaining text box. You lose 2 seconds scrolling back up, and you may lose your idea on the way (especially if you have 1001 ideas at the same time). When you perform a lot of searches per day, the time Instant saves per search is largely lost in scrolling back to the top. And I’m not the only one who thinks it was a bad idea.