
Planet49 CJEU Judgment brings some ‘Cookie Consent’ Certainty to Planet Online Tracking

The Court of Justice of the European Union published yesterday its long-awaited judgment in the Planet49 case, referred by a German Court in proceedings initiated by a non-governmental consumer protection organization representing participants in an online lottery. It dealt with questions that should have been clarified a long time ago, after the consent requirement in Article 5(3) of Directive 2002/58 (the ‘ePrivacy Directive’) was introduced by a 2009 amendment, with Member States transposing and then applying it inconsistently ever since:

  • Is obtaining consent through a pre-ticked box valid when placing cookies on website users’ devices?
  • Must the notice given to the user when obtaining consent include the duration of the operation of the cookies being placed and whether or not third parties may have access to those cookies?
  • Does it matter for the application of the ePrivacy rules whether the data accessed through the cookies being placed is personal or non-personal?

The Court answered all of the above, while at the same time signaling to Member States that a disparate approach in transposing and implementing the ePrivacy Directive is not consistent with EU law, and setting clear guidance on what ‘specific’, ‘unambiguous’ and ‘informed’ consent means.

The core of the Court’s findings is that:

  • pre-ticked boxes do not amount to valid consent,
  • the expiration date of cookies and any third-party sharing should be disclosed to users when obtaining consent,
  • different purposes should not be bundled under the same consent ask,
  • in order for consent to be valid, ‘an active behaviour with a clear view’ (which I read as ‘intention’) of consenting is required (so claiming in notices that consent is obtained by users simply continuing to use the website very likely does not meet this threshold) and,
  • (quite consequentially) these rules apply to cookies regardless of whether the data accessed is personal or not.
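
For readers implementing cookie banners, here is a minimal sketch of a consent prompt reflecting these findings. It is illustrative only: the purposes, cookie durations and third parties named below are hypothetical and not taken from the judgment.

```typescript
// Illustrative sketch only: a consent prompt aligned with the Planet49 findings.
// Checkboxes start unticked, purposes are not bundled, and the duration and
// third-party recipients of each category of cookies are disclosed. Nothing is
// recorded until the user actively clicks the button.

interface CookiePurpose {
  id: string;
  label: string;
  durationDays: number;   // disclosed to the user (duration of operation)
  thirdParties: string[]; // disclosed to the user (third-party access)
}

// Hypothetical purposes, for illustration only.
const purposes: CookiePurpose[] = [
  { id: "analytics", label: "Audience measurement", durationDays: 90, thirdParties: [] },
  { id: "ads", label: "Behavioural advertising", durationDays: 365, thirdParties: ["Example Ad Network"] },
];

function renderConsentPrompt(container: HTMLElement, onConsent: (granted: string[]) => void): void {
  for (const p of purposes) {
    const row = document.createElement("label");
    const box = document.createElement("input");
    box.type = "checkbox";
    box.checked = false; // never pre-ticked
    box.dataset.purpose = p.id;
    row.appendChild(box);
    row.append(
      ` ${p.label}: cookies kept for ${p.durationDays} days; ` +
        (p.thirdParties.length > 0
          ? `shared with: ${p.thirdParties.join(", ")}`
          : "not shared with third parties")
    );
    container.appendChild(row);
  }
  const accept = document.createElement("button");
  accept.textContent = "Accept selected purposes";
  // Consent is recorded only on an active, intentional click,
  // never on scrolling or continued browsing.
  accept.addEventListener("click", () => {
    const granted = Array.from(container.querySelectorAll<HTMLInputElement>("input[type=checkbox]"))
      .filter((b) => b.checked)
      .map((b) => b.dataset.purpose as string);
    onConsent(granted);
  });
  container.appendChild(accept);
}
```

The key points are that nothing is pre-selected, each purpose is presented and accepted separately, and no cookie is set until the explicit click.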

Unfortunately, though, the Court did not tackle one other very important issue: what does ‘freely given’ consent mean? In other words, would consent for placing cookies for the purpose of online tracking for behavioural advertising, required and obtained as a condition of access to an online service, such as an online lottery (as in Planet49’s case), be considered ‘freely given’?

An answer to this question would have affected all online publishers and online service providers that condition access to their services on users allowing online behavioural tracking cookies to be installed on their devices, and that rely on ‘cookie walls’ as a source of income for their businesses. What is interesting is that the Court included a paragraph in the judgment specifically stating that it does not give its view on this issue because it was not asked to do so by the referring German Court (paragraph 64). Notably, ‘freely given’ is the only one of the four conditions for valid consent that the Court did not assess in its judgment, and the only one it specifically singled out as being left open.

Finally, one very important point to highlight is that the entirety of the findings were made under the rules for valid consent as they were provided by Directive 95/46. The Court even specified that its finding concerning ‘unambiguous’ consent is made under the old directive. This is relevant because the definition of consent in Article 2(h) of Directive 95/46 only refers to ‘any freely given specific and informed indication’ of agreement. However, Article 7(a) of the directive provides that the data subject’s consent may make a processing lawful if it was given ‘unambiguously’.

With the GDPR, the four scattered conditions have been gathered under Article 4(11) and reinforced by clearer recitals. The fact remains that the conditions for valid consent were just as strong under Directive 95/46. The Court almost pointedly highlights that its interpretation is made on the conditions provided under the old legal regime and that they apply to the GDPR ‘a fortiori’ (paragraph 60); (see here for what a fortiori means in legal interpretation).

Consequently, it seems that consent for placing cookies obtained with the help of pre-ticked boxes, or through inaction, or through action without the intent to give consent, was unlawfully obtained even before the GDPR entered into force. It remains to be seen whether supervisory authorities will take any action to tackle some of those collections of data built on unlawfully obtained consent, or whether they will take a clean-slate approach.

For a deeper dive into the key findings of the Planet49 CJEU judgment, read below:

Discrepancies in applying ePrivacy at Member State level, unjustifiable based on the Directive’s text

Before assessing the questions referred on substance, the Court makes some preliminary findings. Among them, it finds that ‘the need for a uniform application of EU law and the principle of equality require that the wording of a provision of EU law which makes no express reference to the law of the Member States for the purpose of determining its meaning and scope must normally be given an autonomous and uniform interpretation throughout the European Union’ (paragraph 47). Article 5(3) of the ePrivacy Directive does not provide any room for Member State law to determine the scope and meaning of its provisions, by being sufficiently clear and precise in what it asks the Member States to do (see paragraph 46 for the Court’s argument).

In practice, divergent transposition and implementation of the ePrivacy Directive has created different regimes across the Union, which had consequences for the effectiveness of its enforcement.

‘Unambiguous’ means ‘active behavior’ and intent to give consent

The Court starts its assessment from a linguistic interpretation of the wording of Article 5(3) of Directive 2002/58. It notes that the provision doesn’t require a specific way of obtaining consent to the storage of and access to cookies on users’ devices. The Court observes that ‘the wording ‘given his or her consent’ does however lend itself to a literal interpretation according to which action is required on the part of the user in order to give his or her consent.

In that regard, it is clear from recital 17 of Directive 2002/58 that, for the purposes of that directive, a user’s consent may be given by any appropriate method enabling a freely given specific and informed indication of the user’s wishes, including by ticking a box when visiting an internet website‘ (paragraph 49).

The Court highlights that per Article 2(f) of Directive 2002/58 the meaning of a user’s ‘consent’ under the ePrivacy Directive is meant to be the same as that of a data subject’s consent under Directive 95/46 (paragraph 50). By referring to Article 2(h) of the former data protection directive, the Court observes that ‘the requirement of an ‘indication’ of the data subject’s wishes clearly points to active, rather than passive, behaviour’ (paragraph 52). The Court then concludes that ‘consent given in the form of a preselected tick in a checkbox does not imply active behaviour on the part of a website user’ (paragraph 52).

Interestingly, the Court points out that this interpretation of what ‘indication’ means ‘is borne out by Article 7 of Directive 95/46’ (paragraph 53), and in particular by Article 7(a), which ‘provides that the data subject’s consent may make such processing lawful provided that the data subject has given his or her consent ‘unambiguously’’ (paragraph 54). So even if the definition of consent in Directive 95/46 does not refer to this condition in particular, the Court nevertheless anchored its main arguments in it.

The Court then made another important interpretation concerning what ‘unambiguous’ consent means: ‘Only active behaviour on the part of the data subject with a view to giving his or her consent may fulfil that requirement’ (paragraph 54). This wording (‘with a view to’) suggests that there is a condition of willfulness, of intent to give consent in order for the indication of consent to be lawful.

In addition, to be even clearer, the Court finds that ‘it would appear impossible in practice to ascertain objectively whether a website user had actually given his or her consent to the processing of his or her personal data by not deselecting a pre-ticked checkbox nor, in any event, whether that consent had been informed. It is not inconceivable that a user would not have read the information accompanying the preselected checkbox, or even would not have noticed that checkbox, before continuing with his or her activity on the website visited’ (paragraph 55).

A fortiori, it appears impossible in practice to ascertain objectively whether a website user has actually given his or her consent to the processing of his or her personal data by merely continuing with his or her activity on the website visited (continued browsing or scrolling), or whether that consent was informed, given that the information presented to the user does not even include a pre-ticked checkbox that would at least offer the opportunity to uncheck the box. Also, just as the Court points out, it is not inconceivable that a user would not have read the information telling him or her that by continuing to use the website they give consent.

With these two findings in paragraphs 54 and 55, the Court seems to clarify once and for all that informing users that continuing their activity on a website signifies consent to placing cookies on their device is not sufficient to obtain valid consent under the ePrivacy Directive, read in the light of both Directive 95/46 and the GDPR.

‘Specific’ means consent can’t be inferred from bundled purposes

The next condition that the Court analyzes is that of specificity. In particular, the Court finds that ‘specific’ consent means that ‘it must relate specifically to the processing of the data in question and cannot be inferred from an indication of the data subject’s wishes for other purposes’ (paragraph 58). This means that bundled consent will not be considered valid and that consent should be sought granularly for each purpose of processing.

‘Informed’ means being able to determine the consequences of any consent given

One of the questions sent for a preliminary ruling by the German Court concerned specific categories of information that should be disclosed to users in the context of obtaining consent for placing cookies. Article 5(3) of the ePrivacy Directive requires that the user is provided with ‘clear and comprehensive information’ in accordance with Directive 95/46 (now replaced by the GDPR). The question was whether this notice must also include (a) the duration of the operation of cookies and (b) whether or not third parties may have access to those cookies.

The Court clarified that providing ‘clear and comprehensive’ information means ‘that a user is in a position to be able to determine easily the consequences of any consent he or she might give and ensure that the consent given is well informed. It must be clearly comprehensible and sufficiently detailed so as to enable the user to comprehend the functioning of the cookies employed’ (paragraph 74). Therefore, it seems that using language that is easily comprehensible for the user is important, just as it is important to paint a full picture of the functioning of the cookies for which consent is sought.

The Court found specifically with regard to cookies that ‘aim to collect information for advertising purposes’ that ‘the duration of the operation of the cookies and whether or not third parties may have access to those cookies form part of the clear and comprehensive information‘ which must be provided to the user (paragraph 75).

Moreover, the Court adds that ‘information on the duration of the operation of cookies must be regarded as meeting the requirement of fair data processing‘ (paragraph 78). This is remarkable, since the Court doesn’t usually make findings in its data protection case-law with regard to the fairness of processing. Doubling down on its fairness considerations, the Court goes even further and links fairness of the disclosure of the retention time to the fact that ‘a long, or even unlimited, duration means collecting a large amount of information on users’ surfing behaviour and how often they may visit the websites of the organiser of the promotional lottery’s advertising partners’ (paragraph 78).

It is irrelevant whether the data accessed by cookies is personal or anonymous; the ePrivacy provisions apply regardless

The Court was specifically asked to clarify whether the cookie consent rules in the ePrivacy Directive apply differently depending on the nature of the data being accessed. In other words, does it matter whether the data being accessed by a cookie is personal or anonymized/aggregated/de-identified?

First of all, the Court points out that in the case at hand, ‘the storage of cookies … amounts to a processing of personal data’ (paragraph 67). That being said, the Court nonetheless notes that the provision analyzed merely refers to ‘information’ and does so ‘without characterizing that information or specifying that it must be personal data’ (paragraph 68).

The Court explained that this general framing of the provision ‘aims to protect the user from interference with his or her private sphere, regardless of whether or not that interference involves personal data’ (paragraph 69). This finding is particularly relevant for the current legislative debate over the revamp of the ePrivacy Directive. It is clear that the core difference between the GDPR framework and the ePrivacy regime is what they protect: the GDPR is concerned with ensuring the protection of personal data and fair data processing whenever personal data is being collected and used, while the ePrivacy framework is concerned with shielding the private sphere of an individual from any unwanted interference. That private sphere/private center of interest may include personal data or not.

The Court further refers to recital 24 of the ePrivacy Directive, which mentions that “any information stored in the terminal equipment of users of electronic communications networks are part of the private sphere of the users requiring protection under the European Convention for the Protection of Human Rights and Fundamental Freedoms. That protection applies to any information stored in such terminal equipment, regardless of whether or not it is personal data, and is intended, in particular, as is clear from that recital, to protect users from the risk that hidden identifiers and other similar devices enter those users’ terminal equipment without their knowledge” (paragraph 70).

Conclusion

The judgment of the CJEU in Planet49 provides some much needed certainty about how the ‘cookie banner’ and ‘cookie consent’ provisions in the ePrivacy Directive should be applied, after years of disparate approaches from national transposition laws and supervisory authorities, which led to a lack of effectiveness in enforcement and, hence, compliance. The judgment does leave open one burning question: what does ‘freely given consent’ mean? It is important to note nonetheless that before even reaching the ‘freely given’ question, any consent obtained for placing cookies (or similar technologies) on user devices will have to meet all of the other three conditions. If only one of them is not met, that consent is invalid.

***

You can refer to this summary by quoting G. Zanfir-Fortuna, ‘Planet49 CJEU Judgment brings some ‘Cookie Consent’ Certainty to Planet Online Tracking’, http://www.pdpecho.com, published on October 3, 2019.

The right to be forgotten goes back to the CJEU (with Google, CNIL, sensitive data, freedom of speech)

The Conseil d’Etat announced today that it referred several questions to the Court of Justice of the EU concerning the interpretation of the right to be forgotten, pursuant to Directive 95/46 and following the CJEU’s landmark decision in the Google v Spain case.

The questions were raised within proceedings involving the application of four individuals to the Conseil d’Etat to have decisions issued by the CNIL (French DPA) quashed. These decisions rejected their requests for injunctions against Google to have certain Google Search results delisted.

According to the press release of the Conseil d’Etat, “these requests were aimed at removing links relating to various pieces of information: a video that explicitly revealed the nature of the relationship that an applicant was deemed to have entertained with a person holding a public office; a press article relating to the suicide committed by a member of the Church of Scientology, mentioning that one of the applicants was the public relations manager of that Church; various articles relating to criminal proceedings concerning an applicant; and articles relating the conviction of another applicant for having sexually aggressed minors.”

The Conseil d’Etat further explained that, in order to rule on these claims, it deemed it necessary to answer a number of questions “raising serious issues with regard to the interpretation of European law in the light of the European Court of Justice’s judgment in its Google Spain case.”

“Such issues are in relation with the obligations applying to the operator of a search engine with regard to web pages that contain sensitive data, when collecting and processing such information is illegal or very narrowly framed by legislation, on the grounds of its content relating to sexual orientations, political, religious or philosophical opinions, criminal offences, convictions or safety measures. On that point, the cases brought before the Conseil d’Etat raise questions in close connection with the obligations that lie on the operator of a search engine, when such information is embedded in a press article or when the content that relates to it is false or incomplete”.

***


CJEU: CCTV camera in family home falls under the Data protection directive, but it is in principle lawful

The CJEU gave its decision today in Case C-212/13 František Ryneš, under the preliminary ruling procedure. The press release is available here and the decision here.

Facts

A person who broke the window of the applicant’s home and was identified by the police with the help of the applicant’s CCTV camera complained that the footage was in breach of data protection law, as he had not given consent for that processing operation. The Data Protection Authority fined the applicant, and the applicant challenged the DPA’s decision before an administrative court. The administrative court sent a question for a preliminary ruling to the CJEU.

Video image is personal data

First, the Court established that “the image of a person recorded by a camera constitutes personal data because it makes it possible to identify the person concerned” (para. 22).

In addition, video surveillance involving the recording and storage of personal data falls within the scope of the Directive, since it constitutes automatic data processing.

Household exception must be “narrowly construed”

According to the Court, as far as the provisions of the Data protection directive govern the processing of personal data liable to infringe fundamental freedoms, they “must necessarily be interpreted in the light of the fundamental rights set out in the Charter (see Google Spain and Google, EU:C:2014:317, paragraph 68)”, and “the exception provided for in the second indent of Article 3(2) of that directive must be narrowly construed” (para. 29).

In this sense, the Court emphasized the use of the word “purely” in the legal provision for describing the personal or household activity under this exception (para. 30).

Such a processing operation is most likely lawful

In one of the last paragraphs of the decision, the Court clarifies that “the application of Directive 95/46 makes it possible, where appropriate, to take into account — in accordance, in particular, with Articles 7(f), 11(2), and 13(1)(d) and (g) of that directive — legitimate interests pursued by the controller, such as the protection of the property, health and life of his family and himself, as in the case in the main proceedings” (para. 34).

This practically means that, even if the household exception does not apply in this case and the processing operation must comply with the requirements of the Data protection directive, those requirements can be met, so a CCTV camera recording activity such as the one in the main proceedings is in principle lawful.

NB: The Court used non-typical terminology in this decision, referring to “the right to privacy” (para. 29).

Court of Justice of the EU: Member States are not obliged to provide for exceptions in the application of data subjects’ rights

The Court of Justice of the European Union ruled on November 7, in Case C-473/12 IPI v. Geofrey Engelbert, that Article 13(1) of Directive 95/46, providing for exceptions in the application of the rights of the data subjects, “must be interpreted as meaning that Member States have no obligation, but have the option, to transpose into their national law one or more of the exceptions which it lays down to the obligation to inform data subjects of the processing of their data”.

Article 13(1) has the following content:

‘Member States may adopt legislative measures to restrict the scope of the obligations and rights provided for in Articles 6(1), 10, 11(1), 12 and 21 when such a restriction constitutes a necessary measure to safeguard:

(a) national security;
(b) defence;
(c) public security;
(d) the prevention, investigation, detection and prosecution of criminal offences, or of breaches of ethics for regulated professions;
(e) an important economic or financial interest of a Member State or of the European Union, including monetary, budgetary and taxation matters;
(f) a monitoring, inspection or regulatory function connected, even occasionally, with the exercise of official authority in cases referred to in (c), (d) and (e);
(g) the protection of the data subject or of the rights and freedoms of others.’

It must also be noted that these exemptions apply to:
– the principles relating to data quality enshrined in Article 6(1) of the Directive;
– the right to information, in Articles 10 and 11(1) of the Directive;
– the right to access personal data, enshrined in Article 12 of the Directive, a provision which also contains the right to rectification, erasure and blocking of data (Article 13(2));
– the publicizing of processing operations, enshrined in Article 21 of the Directive.

By the same reasoning, one can conclude that the decision of the Court in IPI v. Engelbert also applies to the other provisions to which Article 13(1) refers, not only to Articles 10 and 11(1). The latter were the provisions relevant in this particular case.

The conclusion of the Court is rather interesting. It is a well-known fact that “Directive 95/46 amounts to harmonisation which is generally complete”, as the Court itself notes in para. 31 of the IPI v. Engelbert decision, citing Case C‑101/01 Lindqvist [2003] ECR I‑12971, paragraphs 95 and 96, and Huber, paragraphs 50 and 51. How does the idea of non-compulsory exemptions and restrictions provided for in Directive 95/46 fall within the concept of “generally complete harmonisation”?

To justify this approach, the Court added in para. 31 that “the provisions of Directive 95/46 are necessarily relatively general given that it has to be applied to a large number of very different situations, and that the directive includes rules with a degree of flexibility and, in many instances, leaves to the Member States the task of deciding the details or choosing between options”, citing Lindqvist, para. 83.

The most compelling reason for the Court to decide so must have been an argument it brought in para. 28 of the IPI Decision: “It is apparent from recitals 3, 8 and 10 of Directive 95/46 that the European Union legislature sought to facilitate the free movement of personal data by the approximation of the laws of the Member States while safeguarding the fundamental rights of individuals, in particular the right to privacy, and ensuring a high level of protection in the European Union.”

It appears that the Court is more likely to interpret the provisions of Directive 95/46 through the “high level of protection” criterion, rather than the “generally complete harmonization” one.

The IPI Decision raises several questions:

* Do Member States have the liberty to provide for no exemptions and restrictions derived from Article 13(1) of the Directive at all?

* If this is not the case, what are the criteria to decide which are the minimum exceptions that must be regulated?

* Is the list of exemptions and restrictions enshrined in Article 13(1) of the Directive exhaustive? In other words, taking into account that the generally complete harmonisation allows Article 13(1) to be interpreted in a flexible manner, can a Member State provide for additional exceptions?

One last remark: the question of exemptions and restrictions in data protection law is a sensitive one, if only considering the national security exception often invoked to justify interferences with the privacy of electronic communications. In this regard, see also THIS older post on pdpEcho.

Why be upset?! National security exemptions for personal data processing are all over the EU data protection legal framework

The Rapporteur for the EU Data Protection Regulation in the European Parliament, MEP Jan Philipp Albrecht, released today a concise and clear opinion on the link between the US surveillance leaks and the ongoing EU data protection reform.

Among other comments, he also underlined that “The leaks hit the public in the middle of ongoing negotiations and debates in the European Parliament on the Data Protection Regulation. The draft of this regulation, sent in November 2011 by Justice Commissioner Viviane Reding to her colleagues, already contained a provision that would make it a condition for the disclosure of user data to authorities in third countries to have a legal foundation such as a mutual legal assistance agreement and an authorisation by the competent data protection authority. This Article disappeared after strong lobbying from the US administration, and only a very weak Recital remained.” Which is a valid point. You can read all of his statement HERE.

My problem with this debate in general is that, legally speaking, if the state in these mass surveillance revelations were an EU Member State, and not the US, we (EU citizens) would have little to argue against it based on current (and future, for that matter) EU law. Article 3(2) of Directive 95/46 on the protection of personal data states that:

2. This Directive shall not apply to the processing of personal data:

– in the course of an activity which falls outside the scope of Community law, such as those provided for by Titles V and VI of the Treaty on European Union and in any case to processing operations concerning public security, defence, State security (including the economic well-being of the State when the processing operation relates to State security matters) and the activities of the State in areas of criminal law.

A similar provision exists in the proposed draft Regulation, at art. 2:

This Regulation does not apply to the processing of personal data:
(a) in the course of an activity which falls outside the scope of Union law, in particular concerning national security;

You could argue that Directive 95/46 is the framework Directive (applying only to matters which used to fall under the former first pillar of the Communities) and that in criminal law matters (the former third pillar) the current EU legal framework is defined by Council Framework Decision 2008/977/JHA. And indeed this is true. However, Article 1(4) of the Framework Decision limits its scope as follows:

4. This Framework Decision is without prejudice to essential national security interests and specific intelligence activities in the field of national security.

And if you think that in the proposed directive for data processing in criminal matters, which will replace the framework decision, the national security rule is sweetened in favor of the data subject with additional safeguards, think again (and read art. 2):

3. This Directive shall not apply to the processing of personal data:

(a) in the course of an activity which falls outside the scope of Union law, in particular concerning national security;

But, you would say, these are only secondary sources of EU law. We could look higher for protection. We have a fundamental right to private life and a fundamental right to the protection of personal data, guaranteed in the European Charter of Fundamental Rights, which from December 1, 2009, has binding effect on the EU Member States. That is also true. However, the scope of the Charter, according to art. 51, is limited to situations in which Member States are implementing Union law (such as transposing a directive, applying the resulted national law, or applying a regulation). Moreover, to make things clearer,  art. 51(2) provides that “this Charter does not establish any new power or task for the Community or the Union, or modify powers and tasks defined by the Treaties”. And national security measures of a Member State are definitely outside the powers of the EU. So, even if the institutional system of the EU goes upside down and we would be able to file complaints directly to the Court of Justice of the European Union, as individuals, the Court would have little to say about the conformity of such surveillance practices with the Charter.

What to do then? We should leave the EU system of protection and look towards the one created by the Council of Europe. Article 8 of the European Convention on Human Rights protects the right to respect for private life. However, Article 8(2) states that:

“There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.”

The national security exemption, all over again. But don’t get too disappointed. The ECHR, at least from what I’ve read in its up-to-date case-law on Article 8, would never find mass surveillance a proportionate measure, and hence would never declare it necessary in a democratic society. In fact, there are several decisions made by the ECHR against CoE member states in the context of their intelligence activities and their clash with art. 8 of the Convention (see, for instance, Rotaru v. Romania).

Great. But how could you get your case before the ECHR? First, you would have to file a complaint against the institution which breaches your fundamental right to private life in one of your national courts, basing your claim on a national provision. Only if your national court does not give a favorable decision, and after you have exhausted all national judicial review possibilities, would you be able to go to the ECHR and complain that your state has not respected your fundamental right to private life. If the ECHR finds in your favor, you would probably be compensated with an amount of money (which usually does not exceed 10,000 euro). But that would only be your individual case. There are no class actions before the ECHR. And the ECHR has no competence to invalidate a national law. A change in the national law could happen only if the state wants to make it, so it is difficult to predict whether it would happen at all. And the whole process I described usually lasts several years (four to six).

Oh, and remember, the whole analysis above assumed that the state with mass surveillance habits is a member of the EU and a member of the CoE! If it is a third country, and if it operates through legal persons under its own jurisdiction while only your data find themselves in an extraterritorial position, then, legally speaking, your possible actions are most likely “frozen”. {This is why clouds must be approached by themselves, from a regulatory perspective, establishing their own architecture as a territory subject to a certain law. But even if such an idealistic thing were to happen, national security (just like that, without further safeguards or proportionality provisions) is always an exception. The analysis we went through together showed that this kind of mass surveillance can be sanctioned only for not being proportionate to the aim it pursues. But for that to happen, we would need a court to decide so. A court recognized by all the parties involved, which can make enforceable decisions in such a context. Global governance suddenly sounds more interesting and ever closer to you, doesn’t it?}

A comment

It is important to note that the national security exemptions in data protection law, as long as the intrusions are proportionate and necessary in a democratic society, are accepted by the people as part of their social contract with their state. What makes people (at least in Europe) uncomfortable about the whole PRISM story is that the processing of their data under the national security exemption is performed by a state with which they do not have a social contract. What are they getting back in exchange for their privacy? They look to their “states” for protection (by which I mean the national state and the EU), but what are the mechanisms for their states to afford such protection in the international law paradigm?

Conclusion? 

Should the national security exemption be reconsidered, especially with regard to surveillance? Should it be made subject to safeguards such as proportionality embedded in the law? Is that too dangerous? Or is it necessary to protect personal freedom? Should such rules be constitutionalized? And if so, at what level should they be constitutionalized? And which court or which other mechanism should safeguard their “constitutionality”? I think this can be the effective part of the debate we should have after the recent developments. And we should also work on finding better questions to answer within this debate.


“Purpose limitation”, explained by the Article 29 WP

On April 2, the Article 29 WP published its Opinion on “purpose limitation”, one of the safeguards that make data protection effective in Europe.

Purpose limitation protects data subjects by setting limits on how data controllers are able to use their data while also offering some degree of flexibility for data controllers. The concept of purpose limitation has two main building blocks: personal data must be collected for ‘specified, explicit and legitimate’ purposes (purpose specification) and not be ‘further processed in a way incompatible’ with those purposes (compatible use).

Further processing for a different purpose does not necessarily mean that it is incompatible: compatibility needs to be assessed on a case-by-case basis. A substantive compatibility assessment requires an assessment of all relevant circumstances. In particular, account should be taken of the following key factors:

– the relationship between the purposes for which the personal data have been collected and the purposes of further processing;
– the context in which the personal data have been collected and the reasonable expectations of the data subjects as to their further use;
– the nature of the personal data and the impact of the further processing on the data subjects;
– the safeguards adopted by the controller to ensure fair processing and to prevent any undue impact on the data subjects.

Conclusions of the Opinion:

First building block: ‘specified, explicit and legitimate’ purposes

With regard to purpose specification, the WP29 highlights the following key considerations:

– Purposes must be specific. This means that – prior to, and in any event, no later than the time when the collection of personal data occurs – the purposes must be precisely and fully identified to determine what processing is and is not included within the specified purpose and to allow that compliance with the law can be assessed and data protection safeguards can be applied.

– Purposes must be explicit, that is, clearly revealed, explained or expressed in some form in order to make sure that everyone concerned has the same unambiguous understanding of the purposes of the processing irrespective of any cultural or linguistic diversity. Purposes may be made explicit in different ways.

– There may be cases of serious shortcomings, for example where the controller fails to specify the purposes of the processing in sufficient detail or in a clear and unambiguous language, or where the specified purposes are misleading or do not correspond to reality. In any such situation, all the facts should be taken into account to determine the actual purposes, along with the common understanding and reasonable expectations of the data subjects based on the context of the case.

– Purposes must be legitimate. Legitimacy is a broad requirement, which goes beyond a simple cross-reference to one of the legal grounds for the processing referred to under Article 7 of the Directive. It also extends to other areas of law and must be interpreted within the context of the processing. Purpose specification under Article 6 and the requirement to have a lawful ground for processing under Article 7 of the Directive are two separate and cumulative requirements.

– If personal data are further processed for a different purpose:
  – the new purpose(s) must be specified (Article 6(1)(b)), and
  – it must be ensured that all data quality requirements (Articles 6(1)(a) to (e)) are also satisfied for the new purposes.

Second building block: compatible use

– Article 6(1)(b) of the Directive also introduces the notions of ‘further processing’ and ‘incompatible’ use. It requires that further processing must not be incompatible with the purposes for which personal data were collected. The prohibition of incompatible use sets a limitation on further use. It requires that a distinction be made between further use that is ‘compatible’, and further use that is ‘incompatible’, and therefore, prohibited.

– By prohibiting incompatibility rather than requiring compatibility, the legislator seems to give some flexibility with regard to further use. Further processing for a different purpose does not necessarily and automatically mean that it is incompatible, as compatibility needs to be assessed on a case-by-case basis.

– In this context, the WP29 emphasises that the specific provision in Article 6(1)(b) of the Directive on ‘further processing for historical, statistical or scientific purposes’ should be seen as a specification of the general rule, while not excluding that other cases could also be considered as ‘not incompatible’. This leads to a more prominent role for different kinds of safeguards, including technical and organisational measures for functional separation, such as full or partial anonymisation, pseudonymisation, aggregation of data, and privacy enhancing technologies.

The Opinion is available HERE.

DP fundamentals: A few facts on Information and Access

Among the concrete data protection rights individuals enjoy in Europe are the right to access data collected about them and the right to be informed about the processing of their data.

These rights are provided under Articles 10, 11 and 12 of Directive 95/46. However, great emphasis is placed on Article 12, which contains both the right of access and the right to obtain confirmation as to whether one’s personal data are being processed by a certain controller.

Prof. Christopher Kuner writes in one of his books that “The rights granted to data subjects under Article 12 can present substantial difficulties for companies. First, given the distributed nature of computing nowadays, personal data may be contained in a variety of databases located in different geographic regions, so that it can be difficult to locate all the data necessary to respond to a data subject’s request. Indeed locating all the data pertaining to a particular data subject in order to allow him to know what data are being held about him to assert his rights of erasure, blockage etc. may require the data controller to comb through masses of data contained in various databases, which in itself could lead to data protection risks”.
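
To make the practical difficulty concrete, here is a minimal sketch of what answering a single access request can involve when personal data is spread across many systems. The DataStore interface and compileAccessReport function are hypothetical, introduced only for illustration; they are not drawn from Kuner’s book.

```typescript
// Hypothetical sketch: a controller answering one access request may have to
// query every system that could hold data about the requester (CRM, billing,
// support tickets, web logs, backups, ...). Missing a store means giving the
// data subject an incomplete answer.

interface DataStore {
  name: string;
  findRecordsBySubject(subjectId: string): Promise<Array<Record<string, unknown>>>;
}

async function compileAccessReport(
  subjectId: string,
  stores: DataStore[],
): Promise<Array<{ store: string; records: Array<Record<string, unknown>> }>> {
  const report: Array<{ store: string; records: Array<Record<string, unknown>> }> = [];
  for (const store of stores) {
    // Each store must be searched separately, possibly in different regions
    // and under different retention rules.
    const records = await store.findRecordsBySubject(subjectId);
    if (records.length > 0) {
      report.push({ store: store.name, records });
    }
  }
  return report;
}
```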

He also writes that another source of problems with complying with Art. 12 is that Member States have transposed this provision differently with regard to the costs of access and the number of times it can be exercised. “For instance, in Finland the data controller may charge its costs in accessing the data and requests by data subjects are limited at one per year, while in UK the controller may charge a fee of up to 10 pounds for access to each entry and reasonable time must elapse between requests. This disharmony of the law creates problems for data controllers that process data of data subjects from different Member States.”

Source: Christopher Kuner, European Data Privacy Law and Online Business, Oxford University Press, 2003 (p. 71, 72)

You can find the book here:

European Data Privacy Law and Online Business