Caught in a torrent

Google Spain – can we really be forgotten?

I am not a conspiracy theorist, but I am very concerned that seemingly benign legislation to protect individual privacy could be used to limit access to legitimate information available on the Internet.

On 13th May 2014, the European Court of Justice upheld the ruling by the AEPD, the Spanish Data Protection Agency, requiring Google to remove all links to the otherwise legal and factual content of a web page about proceedings for the recovery of social security debts owed by a Spanish national, Mario Costeja Gonzalez. He had complained that this information was still visible following a Google search on his name, despite the fact that the debt had been discharged some time before. Although the AEPD refused to order the removal of the original newspaper announcement, it did rule that links to that page should be removed from the results of a search on his name. The ECJ maintains that “an obligation may also exist in a case where that name or information is not erased beforehand or simultaneously from those web pages, and even…when its publication in itself on those pages is lawful”.

The judgement allows that some personal name searches may be in the public interest and that removal of links may not be necessary in those cases.  A lot hangs on what is meant by “the role played by the data subject in public life”.  It could be argued that in bringing this case, the data subject is in the public eye and therefore plays a role in public life (albeit an unofficial role) because he or she is influencing public policy.

There are a number of practical objections to the judgement. Firstly, it is impossible to guarantee that something that was once on the Internet has been deleted. In ‘Regulating the Future’, a paper presented at the iFutures conference in Sheffield in 2013, I outlined the problems of legislating for and policing the ‘right to be forgotten’. Even if the Internet ceased to operate tomorrow, leakage from satellite communications makes it impossible to guarantee that a given data stream has been destroyed. And even if data is deleted from online servers, there will be back-ups, archived copies and downloaded copies on unregulated computers and mobile devices which cannot be monitored or controlled.

Secondly, it is difficult to see how a search engine provider can effectively implement such a ruling. Apart from the absurdity of requiring search engine providers to actively censor access to information about a person “following a search made on the basis of that person’s name”, the ruling does not prevent the same material coming up in other searches, such as searches about the property or about the local authority. This means that the search engine would have to selectively block the link to the site according to how it is being searched for, as the sketch below illustrates.
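To make the objection concrete, here is a minimal sketch in Python of what query-conditional suppression would entail. Everything in it is hypothetical: the URLs, the SUPPRESSED table and the filter_results function are illustrative names of my own, not any real search engine’s machinery.

```python
# A minimal sketch of query-conditional link suppression.
# All names and URLs here are hypothetical.

# URL -> query terms whose presence triggers suppression of that URL
SUPPRESSED = {
    "https://example.com/1998/auction-notice": {"mario", "costeja"},
}

def filter_results(query, results):
    """Drop a result only when the query contains a suppressed name.

    The same URL still surfaces for every other query (e.g. a search
    on the property or on the local authority), which is exactly the
    selective blocking the ruling demands of the provider.
    """
    terms = set(query.lower().split())
    return [url for url in results
            if not terms & SUPPRESSED.get(url, set())]

results = ["https://example.com/1998/auction-notice",
           "https://example.com/council-minutes"]
print(filter_results("mario costeja gonzalez", results))    # notice suppressed
print(filter_results("la vanguardia 1998 auctions", results))  # notice kept
```

Even this toy version shows the oddity of the arrangement: the document is never removed, only one particular route to it, and the provider must maintain a per-person, per-URL blocklist indefinitely.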

A third objection is the difficulty of handling the variety of searches undertaken (variations in a person’s name, alternative spellings, misspellings, etc.) and the way in which search results are tailored to individual user preferences. Eli Pariser’s book The Filter Bubble provides some excellent examples of this principle at work. Do we really want a situation where we only see material that is agreeable to us and which does not challenge us in any way? Tailoring of search results makes it difficult for an individual to ‘know’ what someone else sees when they search for that individual’s name.
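The name-matching problem alone is harder than it looks. The deliberately naive sketch below (again hypothetical, and simplified to the point of caricature) shows how ordinary variants of a name slip past an exact-match filter; every one of these queries plainly targets the same person, yet none would be caught.

```python
# A deliberately naive exact-match filter, to show how easily
# ordinary variants of a name slip past it. The code is hypothetical.

suppressed_name = "mario costeja gonzalez"

queries = [
    "Mario Costeja González",    # accented form
    "M. Costeja Gonzalez",       # initialised first name
    "costeja gonzalez, mario",   # surname-first ordering
    "Mario Costeja Gonzales",    # common misspelling
]

def exact_match(query, name):
    return query.lower() == name

for q in queries:
    print(f"{q!r} suppressed: {exact_match(q, suppressed_name)}")
# All four print False. Handling them robustly would need
# normalisation, transliteration and fuzzy matching, each of
# which also risks over-blocking unrelated people and pages.
```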

A fourth concern is self-censorship by users and producers of information, and by the search engines themselves. This judgement could lead search engine providers to censor their results pre-emptively in order to head off potential cases brought before the national data protection authorities in Europe.

Perhaps most disturbing of all is the capricious and arbitrary nature of the acceptability criterion for linking to personal information. The press release about the judgement says “The Court [European Court of Justice] observes in this regard that even initially lawful processing of accurate data may, in the course of time, become incompatible with the [Data Protection] directive”. This means that something that starts out being acceptable to link to could, at some arbitrary point in time, become unacceptable. It is not difficult to see how this principle could be subverted by less benign governments to deal with ‘unacceptable’ criticisms and scrutiny from their own citizens.


About the author

David Haynes

David is a Director of Aspire². His interests lie in metadata, information taxonomies and information governance. He is an experienced PRINCE2 practitioner. David leads courses in his specialist areas and is the author of ‘Metadata for Information Management and Retrieval’. He is currently researching the regulation of information at City University, London.