Category Archives: DP Fundamentals

Here’s how the Internet’s inventor wants to reinvent it, and why this is great news for privacy

Last May I had the chance to meet Prof. Tim Berners-Lee and one of the lead researchers on his team at MIT, Andrei Sambra, when I accompanied Giovanni Buttarelli, the European Data Protection Supervisor, on his visit to MIT.

Andrei then presented the SOLID project, and we had the opportunity to discuss it with Prof. Berners-Lee, who leads the work on SOLID. The project “aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.” In other words, the researchers want to decentralise the Internet.

“Solid (derived from “social linked data”) is a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles. Solid is modular and extensible and it relies as much as possible on existing W3C standards and protocols”, as explained on the project’s website.

Andrei explains in a blog post that, as a first step, the project seeks solutions “to decouple the applications from the data they produce, and then to decouple the data from the actual storage server.”

“This means that applications and servers are interchangeable, and they can be swapped without impacting the most important part – your data. It’s all about freedom of choice.” (Read the entire explanation in this blog post)
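To make the decoupling idea concrete, here is a small illustrative sketch of my own (not the actual Solid API; the pod URLs and the tiny triple parser are invented for the example). The application depends only on a document URL the user chooses, so the storage server behind that URL can be swapped without touching application code:

```python
# Illustrative sketch of Solid-style decoupling: the app only knows a
# user-chosen document URL; the storage behind it is interchangeable.
# All names and the minimal "parser" here are invented for illustration.

from urllib.parse import urljoin

class ProfileApp:
    """An app that reads a user's profile from whichever pod they pick."""

    def __init__(self, fetch):
        self.fetch = fetch  # injected HTTP-like function: url -> document text

    def display_name(self, pod_url):
        doc = self.fetch(urljoin(pod_url, "profile/card"))
        # Minimal scan for a Turtle-style triple such as:
        #   <#me> foaf:name "Alice" .
        for line in doc.splitlines():
            if "foaf:name" in line:
                return line.split('"')[1]
        return None

# The same app code works against two different storage servers:
fake_pods = {
    "https://pod-a.example/profile/card": '<#me> foaf:name "Alice" .',
    "https://pod-b.example/profile/card": '<#me> foaf:name "Alice" .',
}
app = ProfileApp(fetch=lambda url: fake_pods[url])
print(app.display_name("https://pod-a.example/"))  # Alice
print(app.display_name("https://pod-b.example/"))  # Alice, from a swapped server
```

Swapping the pod means changing only the URL the user hands to the app – which is the “freedom of choice” point Andrei makes.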

I was so excited to find out about the efforts conducted by Prof. Berners-Lee and his team. At the end of the presentation and the discussion, I asked, just to make sure I had understood correctly: “Are you trying to reinvent the Internet?” And Prof. Berners-Lee replied, simply: “Yes”. A couple of weeks later I saw this article in the New York Times: “The Web’s creator looks to reinvent it”. So I did understand correctly 🙂

But why was I so excited? Because I saw first-hand that some of the greatest minds in the world are working to bring control on the Internet back to the individual. Some of the greatest minds in the world are not giving up on privacy, irrespective of how many “Privacy is dead” books and articles are published, and irrespective of how public and private policymakers, lobbyists and courts understand, at this moment in history, the value of privacy and of what Andrei called “freedom of choice” in the digital world.

I was excited because I found out about a common goal that we, the legal privacy bookworms/occasional policymakers, and the IT masterminds share: empower the ‘data subject’, the ‘user’, well, the human being, in the new Digital Age, put them back in control and curtail unnecessary invasions of privacy for all kinds of purposes (from profit-making to security).

In fact, my entire PhD thesis was built on the assumption that the rights of the data subject, as provided in EU law (the rights to access, to erasure, to object, to be informed, to oppose automated decision-making), are all prerogatives that aim to give the individual control over his or her data. So if technical solutions are developed that make this kind of control practical and effective, I am indeed excited about it!

I also realised that some of the provisions that survived incredible, multifaceted opposition to make it to the new General Data Protection Regulation are in fact tenable, like the right to data portability (check out Article 20 of the GDPR, here).

This is why, when I saw that today the world celebrates 25 years since the Internet went public, I remembered this moment in May and I wanted to share it with you. Here’s to a decentralised Internet!

Later Edit: The man himself says August 23 is not exactly accurate. Nor 25 years! In any case, it was still a good day for me to think about all of the above and share it with you 🙂


“The EU-US interface: Is it possible?” CPDP2015 panel. Recommendation and some thoughts

The organizers of CPDP 2015 have made some of the panels from this year’s conference, which took place last week in Brussels, available on their YouTube channel. This is a wonderful gift for people who weren’t able to attend CPDP this year (like myself). So a big thank you for that!

While all of them seem interesting, I especially recommend the “EU-US interface: Is it possible?” panel. My bet is that the dichotomy between the EU and US privacy legal regimes, and the debates surrounding it, will set the framework for “tomorrow’s” global protection of private life.

Exactly one year ago I wrote a four-page research proposal for a post-doc position with the title “Finding Neverland: The common ground of the legal systems of privacy protection in the European Union and the United States”. A very brave idea, to say the least, in a general scholarly environment which still widely accepts Whitman’s liberty vs. dignity dichotomy as a fundamental “rift” between the American and European privacy cultures.

The idea I wanted to develop is to stop looking at what seem to be fundamental differences and start searching for a common ground from which to build new understandings of protecting private life accepted by both systems.

While it is true that, for instance, a socket in Europe is not the same as a socket in the US (as a traveller between the two continents, I am well aware of that), fundamental human values do not change while crossing the ocean. Ultimately, I can turn the socket into a metaphor and say that even if the two continents use very different sockets, the function of those sockets is the same – they are a means to provide energy so that one’s electronic equipment works. So what is this “energy” of the legal regimes that protect private life in Europe and in the US?

My hunch is that this common ground is “free will”, and I have a bit of Hegel’s philosophy to back this idea. My research proposal was rejected (in fact, by the institute which, one year later, organized this panel at CPDP 2015 on the EU-US interface in privacy law). But, who knows? One day I may be able to pursue this idea and make it useful somehow for regulators that will have to find this common ground in the end.

You will discover in this panel some interesting ideas. Margot Kaminski (The Ohio State University Moritz College of Law) brings up the fact that free speech is not absolute in the US constitutional system – “copyright protection can win over the first amendment” she says. This argument is important in the free speech vs privacy debate in the US, because it shows that free speech is not “unbeatable”. It could be a starting point, among others, in finding some common ground.

Pierluigi Perri (University of Milan) and David Thaw (University of Pittsburgh) seem to be the ones who focus most on the common ground of the two legal regimes. They say that, even if it seems that one system is more preoccupied with state intrusions into private life and the other with corporate intrusions, both systems share a “feared outcome – the chilling effect on action and speech” of these intrusions. They propose a “supervised market based regulation” model.

Dennis Hirsch (Capital University Law School) speaks about the need for global privacy rules, or something approximating them, “because data moves so dynamically in so many different ways today and it does not respect borders”. (I happen to agree with this statement – more details here.) Dennis argues in favour of sectoral co-regulation, that is, regulation by government and industry, applied in each sector.

Other contributions are made by Joris van Hoboken, University of Amsterdam/New York University (NL/US) and Eduardo Ustaran, Hogan Lovells International (UK).

The panel is chaired by Frederik Zuiderveen Borgesius, University of Amsterdam, and organised by the Information Society Project at Yale Law School.


What Happens in the Cloud Stays in the Cloud, or Why the Cloud’s Architecture Should Be Transformed in ‘Virtual Territorial Scope’

This is the paper I presented at the Harvard Institute for Global Law and Policy 5th Conference, on June 3-4, 2013. I decided to make it available open access on SSRN. I hope you will enjoy it, and I will be very pleased if any readers provide comments and ideas. The main argument of the paper is that we need global solutions for regulating cloud computing. It begins with a theoretical overview of global governance, internet governance and the territorial scope of laws, and it ends with three possible solutions for global rules envisaging the cloud. Among them, I propose the creation of a “Lex Nubia” (those of you who know Latin will know why 😉 ). My main concern, of course, is privacy and data protection in the cloud, but that is not the sole concern I deal with in the paper.


The most commonly used adjective for cloud computing is “ubiquitous”. This characteristic poses great challenges for law, which may find itself needing to revise its fundamentals. Regulating a “model” of “ubiquitous network access” which relates to “a shared pool of computing resources” (the NIST definition of cloud computing) is perhaps the most challenging task for regulators worldwide since the appearance of the computer, both procedurally and substantively. Procedurally, because it significantly challenges concepts such as the “territorial scope of the law” – what need is there for a territorial scope of a law when regulating a structure which is designed to be “abstracted”, in the sense that nobody knows “where things physically reside”? Substantively, because the legal implications of cloud computing services are complex and cannot be encompassed by one single branch of law, such as data protection law or competition law. This paper contextualizes the idea of a global legal regime for providing cloud computing services, on the one hand by referring to the wider context of global governance and, on the other, by pointing out several solutions for such a regime to emerge.

You can download the full text of the paper by following this link:

“Purpose limitation”, explained by the Article 29 WP

On April 2, the Article 29 Working Party published its Opinion on “purpose limitation”, one of the safeguards that make data protection effective in Europe.

Purpose limitation protects data subjects by setting limits on how data controllers are able to use their data while also offering some degree of flexibility for data controllers. The concept of purpose limitation has two main building blocks: personal data must be collected for ‘specified, explicit and legitimate’ purposes (purpose specification) and not be ‘further processed in a way incompatible’ with those purposes (compatible use).

Further processing for a different purpose does not necessarily mean that it is incompatible: compatibility needs to be assessed on a case-by-case basis. A substantive compatibility assessment requires an assessment of all relevant circumstances. In particular, account should be taken of the following key factors:

– the relationship between the purposes for which the personal data have been collected and the purposes of further processing;
– the context in which the personal data have been collected and the reasonable expectations of the data subjects as to their further use;
– the nature of the personal data and the impact of the further processing on the data subjects;
– the safeguards adopted by the controller to ensure fair processing and to prevent any undue impact on the data subjects.
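For readers who think in code, the four factors can be pictured as a checklist. To be clear, this is only an illustrative sketch of my own: the WP29 test is a holistic legal assessment, not an algorithm, and the field names below are invented.

```python
# Illustrative only: structuring the WP29's four compatibility factors as a
# checklist a controller might walk through. Field names are my own labels,
# not terms from the Opinion; a real assessment weighs circumstances, it
# does not tick booleans.

from dataclasses import dataclass

@dataclass
class CompatibilityAssessment:
    related_purposes: bool     # relationship between original and new purposes
    within_expectations: bool  # context and reasonable expectations of data subjects
    limited_impact: bool       # nature of the data and impact of further processing
    safeguards_in_place: bool  # measures ensuring fair processing, no undue impact

    def factors_against(self):
        """Return the factors that weigh against compatibility."""
        return [name for name, ok in vars(self).items() if not ok]

a = CompatibilityAssessment(True, False, True, False)
print(a.factors_against())  # ['within_expectations', 'safeguards_in_place']
```

The point of the structure is simply that no single factor decides the question: each one feeds into an overall case-by-case judgement.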

Conclusions of the Opinion:

First building block: ‘specified, explicit and legitimate’ purposes

With regard to purpose specification, the WP29 highlights the following key considerations:

– Purposes must be specific. This means that – prior to, and in any event no later than, the time when the collection of personal data occurs – the purposes must be precisely and fully identified to determine what processing is and is not included within the specified purpose, to allow compliance with the law to be assessed and data protection safeguards to be applied.

– Purposes must be explicit, that is, clearly revealed, explained or expressed in some form in order to make sure that everyone concerned has the same unambiguous understanding of the purposes of the processing, irrespective of any cultural or linguistic diversity. Purposes may be made explicit in different ways.

– There may be cases of serious shortcomings, for example where the controller fails to specify the purposes of the processing in sufficient detail or in clear and unambiguous language, or where the specified purposes are misleading or do not correspond to reality. In any such situation, all the facts should be taken into account to determine the actual purposes, along with the common understanding and reasonable expectations of the data subjects based on the context of the case.

– Purposes must be legitimate. Legitimacy is a broad requirement, which goes beyond a simple cross-reference to one of the legal grounds for the processing referred to under Article 7 of the Directive. It also extends to other areas of law and must be interpreted within the context of the processing. Purpose specification under Article 6 and the requirement to have a lawful ground for processing under Article 7 of the Directive are two separate and cumulative requirements.

– If personal data are further processed for a different purpose:
– the new purpose/s must be specified (Article 6(1)(b)), and
– it must be ensured that all data quality requirements (Articles 6(1)(a) to (e)) are also satisfied for the new purposes.

Second building block: compatible use

– Article 6(1)(b) of the Directive also introduces the notions of ‘further processing’ and ‘incompatible’ use. It requires that further processing must not be incompatible with the purposes for which personal data were collected. The prohibition of incompatible use sets a limitation on further use. It requires that a distinction be made between further use that is ‘compatible’ and further use that is ‘incompatible’, and therefore prohibited.

– By prohibiting incompatibility rather than requiring compatibility, the legislator seems to give some flexibility with regard to further use. Further processing for a different purpose does not necessarily and automatically mean that it is incompatible, as compatibility needs to be assessed on a case-by-case basis.

– In this context, the WP29 emphasises that the specific provision in Article 6(1)(b) of the Directive on ‘further processing for historical, statistical or scientific purposes’ should be seen as a specification of the general rule, while not excluding that other cases could also be considered ‘not incompatible’. This leads to a more prominent role for different kinds of safeguards, including technical and organisational measures for functional separation, such as full or partial anonymisation, pseudonymisation, aggregation of data, and privacy enhancing technologies.
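One of the safeguards listed, pseudonymisation, can be sketched in a few lines. This is my own illustrative example of keyed hashing, not a technique prescribed by the Opinion; the key value and token length are arbitrary choices for the sketch.

```python
# Illustrative sketch of pseudonymisation via a keyed hash (HMAC-SHA256):
# records of the same person stay linkable for statistics, but the direct
# identifier is hidden. "Functional separation" is the key handling: the
# key stays with the controller, not with the analysts.

import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Deterministic pseudonym: same input -> same token, but the token
    cannot be reversed to the identifier without the key."""
    digest = hmac.new(secret_key, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncation length is arbitrary here

key = b"held-by-the-controller-only"  # invented key for illustration
records = [("alice@example.com", 120), ("bob@example.com", 80),
           ("alice@example.com", 95)]
tokens = [(pseudonymise(email, key), amount) for email, amount in records]

# Records of the same person remain linkable for analysis...
print(tokens[0][0] == tokens[2][0])  # True
# ...but the token itself reveals no email address.
```

Because the mapping is deterministic only under the key, losing or rotating the key breaks linkability, which is why key custody is the safeguard that matters.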

The Opinion is available HERE.

Going back to basics

Being in the process of writing my thesis, I have realized how important it is to step back from searching through the whirling flux of current information and new developments in the area of privacy and information technology, or more generally “law and technology”, and look back at the beginning of this craziness.

One might find answers to questions she didn’t even know she needed to answer. Or, at least, she might find some reassurance that legal thought in this field is capable of steadiness and coherence.

This is why I decided to share with you the principles enshrined in the first “internationalization” effort of personal data protection that I know of, RESOLUTION (73) 22 ON THE PROTECTION OF THE PRIVACY OF INDIVIDUALS VIS-A-VIS ELECTRONIC DATA BANKS IN THE PRIVATE SECTOR (Adopted by the Committee of Ministers of the Council of Europe on 26 September 1973).


The information stored should be accurate and should be kept up to date. In general, information relating to the intimate private life of persons or information which might lead to unfair discrimination should not be recorded or, if recorded, should not be disseminated.


The information should be appropriate and relevant with regard to the purpose for which it has been stored.


The information should not be obtained by fraudulent or unfair means.


Rules should be laid down to specify the periods beyond which certain categories of information should no longer be kept or used.


Without appropriate authorisation, information should not be used for purposes other than those for which it has been stored, nor communicated to third parties.


As a general rule, the person concerned should have the right to know the information stored about him, the purpose for which it has been recorded, and particulars of each release of this information.


Every care should be taken to correct inaccurate information and to erase obsolete information or information obtained in an unlawful way.


Precautions should be taken against any abuse or misuse of information. Electronic data banks should be equipped with security systems which bar access to the data held by them to persons not entitled to obtain such information, and which provide for the detection of misdirections of information, whether intentional or not.


Access to the information stored should be confined to persons who have a valid reason to know it. The operating staff of electronic data banks should be bound by rules of conduct aimed at preventing the misuse of data and, in particular, by rules of professional secrecy.


Statistical data should be released only in aggregate form and in such a way that it is impossible to link the information to a particular person.

The original text of the Resolution can be found here.

We encounter access rights, purpose limitation, erasure of obsolete data and even the idea of anonymization. In 1973.

I got my ounce of inspiration from wondering how the essence of these principles is still relevant so many decades after they were published. And I hope you will also find yours.


DP fundamentals: Few facts on Information and Access

Among the concrete data protection rights individuals enjoy in Europe are the right to access data collected about them and the right to be informed about the processing of their data.

These rights are provided under Articles 10, 11 and 12 of Directive 95/46. However, great emphasis is placed on Article 12, which contains both the right of access and the right to obtain confirmation of whether one’s personal data are being processed by a certain controller or processor.

Prof. Christopher Kuner writes in one of his books that “The rights granted to data subjects under Article 12 can present substantial difficulties for companies. First, given the distributed nature of computing nowadays, personal data may be contained in a variety of databases located in different geographic regions, so that it can be difficult to locate all the data necessary to respond to a data subject’s request. Indeed locating all the data pertaining to a particular data subject in order to allow him to know what data are being held about him to assert his rights of erasure, blockage etc. may require the data controller to comb through masses of data contained in various databases, which in itself could lead to data protection risks”.

He also writes that another source of problems in complying with Art. 12 is that Member States have transposed this provision differently with regard to the costs of access and the number of times the right can be exercised. “For instance, in Finland the data controller may charge its costs in accessing the data and requests by data subjects are limited at one per year, while in UK the controller may charge a fee of up to 10 pounds for access to each entry and reasonable time must elapse between requests. This disharmony of the law creates problems for data controllers that process data of data subjects from different Member States.”

Source: Christopher Kuner, European Data Privacy Law and Online Business, Oxford University Press, 2003 (pp. 71-72)
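The data-location difficulty Kuner describes can be pictured with a toy example. The store names and record schema below are invented for illustration only:

```python
# Toy illustration of the difficulty Kuner describes: answering a single
# access request may mean combing several independent data stores, and the
# answer is only as complete as the controller's inventory of those stores.

def collect_subject_data(subject_id, data_stores):
    """Gather every record about one data subject across all known stores."""
    found = {}
    for store_name, records in data_stores.items():
        matches = [r for r in records if r.get("subject") == subject_id]
        if matches:
            found[store_name] = matches
    return found

# Invented stores in different regions, each with its own record shape:
stores = {
    "crm_eu":     [{"subject": "ds-42", "email": "a@example.com"}],
    "billing_us": [{"subject": "ds-42", "invoice": 17},
                   {"subject": "ds-99", "invoice": 18}],
    "logs_apac":  [{"subject": "ds-77", "ip": "203.0.113.5"}],
}
result = collect_subject_data("ds-42", stores)
print(sorted(result))  # ['billing_us', 'crm_eu']
# Any store missing from the inventory would silently make the access
# response incomplete - which is exactly the compliance risk Kuner notes.
```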

You can find the book here:

European Data Privacy Law and Online Business