What’s new in research: Georgetown Law Technology Review, human rights and encryption, and data protection-proof free-trade agreements (open access)

I’m starting this week’s “What’s new in research” post with some good news:

  • There is a new technology law journal in town – the Georgetown Law Technology Review, which was just launched. It provides full access to its articles, notes and comments. “Few issues are of greater need for careful attention today than the intersection of law and technology”, writes EPIC’s Marc Rotenberg, welcoming the new Review.
  • The Tilburg Institute for Law, Technology and Society (TILT) launched its open call for fellowship applications for the 2017-2018 academic year. “This programme is for internationally renowned senior scholars who wish to spend the 2017-2018 academic year, or a semester, in residence at TILT as part of its multi-disciplinary research team to work on some of the most interesting, challenging and urgent issues relating to emerging and disruptive technologies.” I spent three months at TILT in 2012, as a visiting researcher, during my PhD studies. I highly recommend the experience – it is one of the best environments in which to develop your research in the field of data protection/privacy.

 


As for the weekend reads proposed this week, they tackle hot topics: human rights and encryption from a global perspective, international trade agreements and data protection from the EU law perspective, and newsworthiness and the protection of privacy in the US.

 

1. “Human rights and encryption”, by Wolfgang Schulz and Joris van Hoboken, published by UNESCO.

“This study focuses on the availability and use of a technology of particular significance in the field of information and communication: encryption, or more broadly cryptography. Over the last decades, encryption has proven uniquely suitable to be used in the digital environments. It has been widely deployed by a variety of actors to ensure protection of information and communication for commercial, personal and public interests. From a human rights perspective, there is a growing recognition that the availability and deployment of encryption by relevant actors is a necessary ingredient for realizing a free and open internet. Specifically, encryption can support free expression, anonymity, access to information, private communication and privacy. Therefore, limitations on encryption need to be carefully scrutinized. This study addresses the relevance of encryption to human rights in the media and communications field, and the legality of interferences, and it offers recommendations for state practice and other stakeholders.”

2. “Trade and Privacy: Complicated Bedfellows? How to Achieve Data Protection-Proof Free Trade Agreements“, by Kristina Irion, Svetlana Yakovleva, Marija Bartl, a study commissioned by the European Consumer Organisation/Bureau Européen des Unions de Consommateurs (BEUC), Center for Digital Democracy (CDD), The Transatlantic Consumer Dialogue (TACD) and European Digital Rights (EDRi).

“This independent study assesses how EU standards on privacy and data protection are safeguarded from liberalisation by existing free trade agreements (the General Agreement of Trade in Services (GATS) and the Comprehensive Economic and Trade Agreement (CETA)) and those that are currently under negotiation (the Trans-atlantic Trade and Investment Partnership (TTIP) and the Trade in Services Agreement (TiSA)). Based on the premise that the EU does not negotiate its privacy and data protection standards, the study clarifies safeguards and risks in respectively the EU legal order and international trade law. In the context of the highly-charged discourse surrounding the new generation free trade agreements under negotiation, this study applies legal methods in order to derive nuanced conclusions about the preservation of the EU’s right to regulate privacy and the protection of personal data.”

3. “Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms”, by Erin C. Carroll, published by the Georgetown University Law Center.

“In deciding privacy lawsuits against media defendants, courts have for decades deferred to the media. They have given it wide berth to determine what is newsworthy and so, what is protected under the First Amendment. And in doing so, they have often spoken reverently of the editorial process and journalistic decision-making.

Yet, in just the last several years, news production and consumption has changed dramatically. As we get more of our news from digital and social media sites, the role of information gatekeeper is shifting from journalists to computer engineers, programmers, and app designers. The algorithms that the latter write and that underlie Facebook, Twitter, Instagram, and other platforms are not only influencing what we read but are prompting journalists to approach their craft differently.

While the Restatement (Second) of Torts says that a glance at any morning newspaper can confirm what qualifies as newsworthy, this article argues that the modern-day corollary (which might involve a glance at a Facebook News Feed) is not true. If we want to meaningfully balance privacy and First Amendment rights, then courts should not be so quick to defer to the press in privacy tort cases, especially given that courts’ assumptions about how the press makes newsworthiness decisions may no longer be accurate. This article offers several suggestions for making better-reasoned decisions in privacy cases against the press.”

Enjoy the reads and have a nice weekend!

***

Find what you’re reading useful? Please consider supporting pdpecho.

 

 

 

 

Greek judges asked the CJEU if they should dismiss evidence gathered under the national law that transposed the invalidated Data Retention Directive

Here is a new case at the Court of Justice of the EU that the data protection world will be watching closely, as it addresses questions about the practical effects of the invalidation of the Data Retention Directive.


Case C-475/16 K. (yes, like those Kafka characters) concerns criminal proceedings against K. before Greek courts, which apparently involve evidence gathered under the Greek national law that transposed the now-invalidated Data Retention Directive. The Directive was invalidated in its entirety by the CJEU in 2014, after the Court found in its Digital Rights Ireland judgment that the provisions of the Directive breached Articles 7 (right to respect for private life) and 8 (right to the protection of personal data) of the Charter of Fundamental Rights.

The Greek judges sent the CJEU a big set of questions for a preliminary ruling in August (17 questions in total). Among them are a couple of very interesting ones, because they deal with the practical effects of the invalidation of an EU Directive and with what happens to the national laws of the Member States that transposed it.

For instance, the national judge asks whether national courts are obliged not to apply legislative measures transposing the annulled Directive and whether this obligation also means that they must dismiss evidence obtained as a consequence of those legislative measures (Question 3). The national judge also wants to know if maintaining the national law that transposes an invalidated Directive constitutes an obstacle to the establishment and functioning of the internal market (Question 16).

Another question raised by the national judge is whether the national legislation that transposed the annulled Data Retention Directive and that remained in force at national level after the annulment is still considered as falling under the scope of EU law (Question 4). The answer to this question is important because the EU Charter and the supremacy of EU law do not apply to situations that fall outside the scope of EU law.

The Greek judge didn’t miss the opportunity to also ask about the effect, on the national law transposing the Data Retention Directive, of the fact that this Directive was also enacted to implement a harmonised framework at the European level under Article 15(1) of the ePrivacy Directive (Question 5). The question is whether this fact is enough to bring the surviving national data retention laws within the scope of EU law.

If the Charter is considered applicable to the facts of the case, the national judge further wants to know whether national law that complies only partly with the criteria set out in the Digital Rights Ireland decision still breaches Articles 7 and 8 of the Charter because it doesn’t comply with all of them (Question 13). For instance, the national judge estimates that the national law doesn’t comply with the requirement that the persons whose data are retained must be at least indirectly in a situation which is liable to give rise to criminal prosecutions (para 58 DRI), but it complies with the requirement that the national law must contain substantive and procedural conditions for the access of competent authorities to the retained data and objective criteria by which the number of persons authorised to access these data is limited to what is strictly necessary (paras 61, 62 DRI).

Lastly, it will also be interesting to see whether the Court decides to address the issue of what “serious crime” means in the context of limiting the exercise of fundamental rights (Questions 10 and 11).

If you would like to delve into some of these topics, have a look at the AG Opinion in the Tele2 Sverige case, published on 19 July 2016. The judgment in that case is due on 21 December 2016. Also, have a look at this analysis of the Opinion.

As for a quick “what to expect” in the K. case from my side, here it is:

  • the CJEU will seriously re-organise the 17 questions and regroup them into 4 or 5 topics, also clarifying that it only deals with the interpretation of EU law, not national law or the facts of national proceedings;
  • the national laws transposing the Data Retention Directive will probably be considered as being in the field of EU law – as they also regulate within the ambit of the ePrivacy Directive;
  • the Court will restate the criteria in DRI and probably clarify that all criteria must be complied with, no exceptions, in order for national measures to comply with the Charter;
  • the CJEU will probably not give indications to the national courts on whether they should admit or dismiss evidence collected on the basis of national law that does not comply with EU law – that is too specific, and the Court is ‘in the business’ of interpreting EU law; the best-case scenario, which is possible, is that the Court will give some guidance on the obligations of Member States (and hopefully their authorities) regarding the effects of their transposing national laws when the relevant EU secondary law is annulled;
  • as for what “serious crime” means in the context of limiting fundamental rights, let’s see about that. Probably the Court will give useful guidance.

***

Find what you’re reading useful? Please consider supporting pdpecho.

What’s new in research: full-access papers on machine learning with personal data, the ethics of Big Data as a public good

Today pdpecho inaugurates a weekly post curating recently published research articles, papers, studies and dissertations in the field of data protection and privacy that are available under an open access regime.

This week there are three recommended pieces for your weekend read. The first article, published by researchers from Queen Mary University of London and Cambridge University, provides an analysis of the impact of using machine learning to conduct profiling of individuals in the context of the EU General Data Protection Regulation.

The second article is the view of a researcher specialised in International Development, from the University of Amsterdam, on the new trend in humanitarian work to consider data as a public good, regardless of whether it is personal or not.

The last paper is a draft authored by a law student at Yale (published on SSRN), which explores an interesting phenomenon: how data brokers have begun to sell data products to individual consumers interested in tracking the activities of love interests, professional contacts, and other people of interest. The paper underlines that the US privacy law system lacks protection for individuals whose data are sold in this scenario and proposes a solution.

1) Machine Learning with Personal Data (by Dimitra Kamarinou, Christopher Millard, Jatinder Singh)

“This paper provides an analysis of the impact of using machine learning to conduct profiling of individuals in the context of the EU General Data Protection Regulation.

We look at what profiling means and at the right that data subjects have not to be subject to decisions based solely on automated processing, including profiling, which produce legal effects concerning them or significantly affect them. We also look at data subjects’ right to be informed about the existence of automated decision-making, including profiling, and their right to receive meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing.

The purpose of this paper is to explore the application of relevant data protection rights and obligations to machine learning, including implications for the development and deployment of machine learning systems and the ways in which personal data are collected and used. In particular, we consider what compliance with the first data protection principle of lawful, fair, and transparent processing means in the context of using machine learning for profiling purposes. We ask whether automated processing utilising machine learning, including for profiling purposes, might in fact offer benefits and not merely present challenges in relation to fair and lawful processing.”

The paper was published as “Queen Mary School of Law Legal Studies Research Paper No. 247/2016”.

2) The ethics of Big Data as a public good

“International development and humanitarian organizations are increasingly calling for digital data to be treated as a public good because of its value in supplementing scarce national statistics and informing interventions, including in emergencies. In response to this claim, a ‘responsible data’ movement has evolved to discuss guidelines and frameworks that will establish ethical principles for data sharing. However, this movement is not gaining traction with those who hold the highest-value data, particularly mobile network operators who are proving reluctant to make data collected in low- and middle-income countries accessible through intermediaries.

This paper evaluates how the argument for ‘data as a public good’ fits with the corporate reality of big data, exploring existing models for data sharing. I draw on the idea of corporate data as an ecosystem involving often conflicting rights, duties and claims, in comparison to the utilitarian claim that data’s humanitarian value makes it imperative to share them. I assess the power dynamics implied by the idea of data as a public good, and how differing incentives lead actors to adopt particular ethical positions with regard to the use of data.”

This article is part of the themed issue ‘The ethical impact of data science’ in “Philosophical Transactions of the Royal Society A”.

3) What Happens When an Acquaintance Buys Your Data?: A New Privacy Harm in the Age of Data Brokers (by Theodore Rostow)

“Privacy scholarship to date has failed to consider a new development in the commercial privacy landscape. Data brokers have begun to sell data products to individual consumers interested in tracking the activities of love interests, professional contacts, and other people of interest. This practice creates an avenue for a new type of privacy harm — “insider control” — which privacy scholarship has yet to recognize.

U.S. privacy laws fail to protect consumers from the possibility of insider control. Apart from two noteworthy frameworks that might offer paths forward, none of the viable reforms offered by privacy scholars would meaningfully limit consumers’ vulnerability. This Note proposes changes to existing privacy doctrines in order to reduce consumers’ exposure to this new harm.”

This paper was published as a draft on SSRN. According to SSRN, the final version will be published in the 34th volume of the Yale Journal on Regulation.

***

Find what you’re reading useful? Please consider supporting pdpecho.

Even if post-Brexit UK adopts the GDPR, it will be left without its “heart”

Gabriela Zanfir Fortuna


There has lately been a wave of optimism among those looking for legal certainty that the GDPR will be adopted by the UK even after the country leaves the European Union. This wave was prompted by a declaration of the British Secretary of State, Karen Bradley, at the end of October, when she stated before a Committee of the Parliament that “We will be members of the EU in 2018 and therefore it would be expected and quite normal for us to opt into the GDPR and then look later at how best we might be able to help British business with data protection while maintaining high levels of protection for members of the public”. The UK Information Commissioner, Elizabeth Denham, welcomed the news. On the other hand, as Amberhawk explained in detail, this will not mean that the UK will automatically be considered as ensuring an adequate level of protection.

The truth is that as long as the UK is still a Member State of the EU, it can’t opt into or out of regulations (other than the ones subject to the exemptions negotiated by the UK when it entered the Union – but this is not the case for the GDPR). Regulations are “binding in their entirety” and “directly applicable”, according to Article 288 of the Treaty on the Functioning of the EU. So, yes, quite normally, if the UK is still a Member State of the EU on 25 May 2018, then the GDPR will start applying in the UK just as it will be applying in Estonia or France.

The fate of the GDPR after Brexit becomes effective will be as uncertain as the fate of all other EU legislative acts transposed in the UK or directly applicable in the UK. But let’s imagine that the GDPR will remain national law after Brexit, in one form or another. If this happens, it is likely that it will take on a life of its own, departing from harmonised application throughout the EU. First and foremost, the GDPR in the UK will not be applied in the light of the Charter of Fundamental Rights of the EU and especially its Article 8 – the right to the protection of personal data. The Charter played an extraordinary role in the strengthening of data protection in the EU after it became binding, in 2009, being invoked by the Court of Justice of the EU in its landmark judgments – Google v Spain, Digital Rights Ireland and Schrems.

The Court held as far back as 2003 that “the provisions of Directive 95/46, in so far as they govern the processing of personal data liable to infringe fundamental freedoms, in particular the right to privacy, must necessarily be interpreted in the light of fundamental rights” (Österreichischer Rundfunk, para 68). This principle was repeated in most of the following cases interpreting Directive 95/46 and other relevant secondary law for this field, perhaps with the most notable results in Digital Rights Ireland and Schrems. 

See, for instance:

“As far as concerns the rules relating to the security and protection of data retained by providers of publicly available electronic communications services or of public communications networks, it must be held that Directive 2006/24 does not provide for sufficient safeguards, as required by Article 8 of the Charter, to ensure effective protection of the data retained against the risk of abuse and against any unlawful access and use of that data” (Digital Rights Ireland, para. 66).

“As regards the level of protection of fundamental rights and freedoms that is guaranteed within the European Union, EU legislation involving interference with the fundamental rights guaranteed by Articles 7 and 8 of the Charter must, according to the Court’s settled case-law, lay down clear and precise rules governing the scope and application of a measure and imposing minimum safeguards, so that the persons whose personal data is concerned have sufficient guarantees enabling their data to be effectively protected against the risk of abuse and against any unlawful access and use of that data. The need for such safeguards is all the greater where personal data is subjected to automatic processing and where there is a significant risk of unlawful access to that data” (Schrems, para. 91).

Applying data protection law outside the spectrum of fundamental rights will most likely not ensure sufficient protection for the person. While the UK will still remain under the legal effect of the European Convention on Human Rights and its Article 8 – respect for private life – this is far from equivalent to the specific protection afforded to personal data by Article 8 of the Charter as interpreted and applied by the CJEU.

Not only will the Charter not be binding on the UK post-Brexit, but the Court of Justice of the EU will no longer have jurisdiction over the UK (unless some sort of spectacular agreement is negotiated for Brexit). Moreover, EU law will not enjoy supremacy over national law, as is the case right now. This means that British data protection law will be able to depart from the European standard (the GDPR) to the extent desired by the legislature. For instance, there will be nothing standing in the way of the British legislature adopting permissive exemptions to the rights of the data subject, pursuant to Article 23 GDPR.

So when I mentioned in the title that the GDPR in the post-Brexit UK will in any case be left without its “heart”, I was referring to its application and interpretation in the light of the Charter of Fundamental Rights of the EU.

***

Find what you’re reading useful? Please consider supporting pdpecho.

Interested in the GDPR? See the latest posts:

CNIL just published the results of their GDPR public consultation: what’s in store for DPOs and data portability? (Part I)

CNIL’s public consultation on the GDPR: what’s in store for Data Protection Impact Assessments and certification mechanisms? (Part II)

The GDPR already started to appear in CJEU’s soft case-law (AG Opinion in Manni)

CNIL’s public consultation on the GDPR: what’s in store for Data Protection Impact Assessments and certification mechanisms? (Part II)

Gabriela Zanfir Fortuna

The French Data Protection Authority, CNIL, made public last week the report of the public consultation it held between 16 and 19 July 2016 among professionals about the General Data Protection Regulation (GDPR). The public consultation gathered 540 replies from 225 contributors.

The CNIL focused on four main issues in the consultation:

  • the data protection officer;
  • the right to data portability;
  • the data protection impact assessments;
  • the certification mechanism.

These are also the four themes in the action plan of the Article 29 Working Party for 2016.

This post summarises the results and action plan for the last two themes. If you want to read about the results on the data protection officer and the right to data portability, check out Part I of this post. [Disclaimer: all quotations are translated from French].

3) On data protection impact assessments (DPIAs)

Article 35 GDPR obliges data controllers to carry out an assessment of the impact of the envisaged processing operations on the protection of personal data prior to the processing, if it is likely to result in a high risk to the rights and freedoms of natural persons, taking into account the nature, scope, context and purposes of the processing, and in particular where that processing uses new technologies. According to Article 35(3), the supervisory authorities must make public a list of the kind of processing operations which are subject to this requirement.

Article 35(3) provides that there are three cases where DPIAs must be conducted (a rough, illustrative check follows after the list):

a) a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing (including profiling);

b) processing of sensitive data on a large scale (e.g. health data, data disclosing race, political opinions, etc.);

c) a systematic monitoring of a publicly accessible area on a large scale.
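
As referenced above, here is a minimal, purely illustrative sketch of the three triggers as a first-pass check. The function name and inputs are my own invention, not part of the GDPR or the CNIL report, and a real assessment naturally requires legal judgment rather than a boolean test; any one of the three triggers alone is enough.

```python
def dpia_required(systematic_extensive_profiling: bool,
                  large_scale_sensitive_data: bool,
                  large_scale_public_monitoring: bool) -> bool:
    """First-pass check of the three Article 35(3) GDPR triggers.

    Any single trigger suffices; a False result does not exclude the
    general 'high risk' test of Article 35(1).
    """
    return (systematic_extensive_profiling
            or large_scale_sensitive_data
            or large_scale_public_monitoring)

# Hypothetical example: an app doing extensive automated profiling of users
print(dpia_required(True, False, False))  # True -> conduct a DPIA
```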

According to the report, the DPIA emerges as a dynamic compliance tool, which contributes to maintaining data security, reducing the risks of processing, determining the suitable safeguards, preventing legal deficiencies and better implementing Privacy by Design and Privacy by Default (p. 17). It was deemed by participants a “new and useful tool”.

There were three main categories of questions raised by the participants in the public consultation:

  • When do controllers have to conduct a DPIA?
  • How to conduct a DPIA?
  • Who does what within the work for a DPIA?

The respondents requested the supervisory authority to be active in helping them prepare for DPIAs – “to clarify everything that is unclear, to involve companies [in discussions], to provide criteria and examples” and to help harmonise the criteria at European level (p. 14).

Several particular cases were brought up by the respondents, such as processing of HR data, processing of data by websites, processing of data by public administration or by hospitals. These scenarios raised questions such as: does the term “large scale” only refer to Big Data? Does it refer to the volume of data that will be processed or to the number of people whose data will be processed? Are “new technologies” all the technologies that are used for the first time by a controller? Is behavioural advertising “profiling” in the sense of the GDPR? (p. 14).

The participants also wanted to know whether a DPIA should be conducted as well for those processing operations that are already in place and that would qualify for one of the “compulsory” cases that require a DPIA.

As for the methodological approach, the respondents asked for a simple method. They also referred to other existing tools that could be used, such as ISO 29134 and EBIOS. In any case, they suggested that the method should be tested with controllers and should be harmonised at European level. There were also questions as to whether professional associations could create their own DPIA methodologies based on sectors of activity (p. 15).

The conclusion of the CNIL was that the contributions to the public consultation showed a great need for clarification, but also revealed “interesting ideas” for the implementation of the DPIA requirements, which will be taken into account. The most difficult points revealed are the criteria to be taken into account when deciding if a DPIA must be conducted, the harmonisation of methodologies at European level and the prior consultation of supervisory authorities (p. 17).

The immediate action plan refers to guidance from the Article 29 Working Party on DPIAs and on what constitutes “high risk”, which will provide interpretations of the vague requirements. The CNIL also aims to take some steps of its own, such as updating its current guidance on Privacy Impact Assessments.

4) On the certification mechanism

Article 42 of the GDPR provides that the establishment of data protection certification mechanisms and of data protection seals and marks, for the purpose of demonstrating that processing operations by controllers and processors comply with the Regulation, shall be “encouraged” by Member States, DPAs, the European Data Protection Board and the European Commission. Article 42(3) clarifies that certification is voluntary and must be available via a transparent process.

Surprisingly, the “certification” part of the public consultation was the one that produced more plain suggestions than questions, compared to the other three, as is apparent from the report. On the other hand, the contributions seem to be smaller in volume, given that this is a rather novel topic for the data protection world.

One of the questions dealt with in the consultation was who should issue certifications/labels. The respondents preferred the option of a certification issued at European level and, only in the absence of such a possibility, a certification issued at national level that should be mutually recognised. They also underlined that the coexistence of certifications issued by DPAs and certifications issued by certification bodies will be difficult. Participants in the consultation suggested that the drafting of standards should be carried out by regulators in consultation with companies and the future evaluators, with a view to harmonising the practices of the different certification bodies (p. 11).

To the question of what should be certified or labeled with priority, the respondents provided a list of suggestions (p. 11):

  • online products and services processing health data;
  • the solutions to monitor/surveil databases;
  • the services provided by the state;
  • anonymisation techniques;
  • search engines;
  • social media platforms.

As to the specific needs of small and medium enterprises, the replies referred to support for filing the requests for certification, the need for reduced costs and the need for a simple methodology (p. 12).

Another topic discussed was how to withdraw a label or a certification in case of misconduct – proposals ranged from creating an “alarm system” to signal non-compliance with the certification, to having an effective withdrawal after an adversarial procedure with a formal notice to the certification body, which could propose a corrective plan during the procedure (p. 12).

Finally, the point that certification under Article 42 GDPR should essentially focus on data protection and not data security was also raised (p. 13).

The report does not contain an action plan for certification.

***

Find what you’re reading useful? Please consider supporting pdpecho.

 

 

CNIL just published the results of their GDPR public consultation: what’s in store for DPOs and data portability? (Part I)

Gabriela Zanfir Fortuna

The French Data Protection Authority, CNIL, made public this week the report of the public consultation it held between 16 and 19 July 2016 among professionals about the General Data Protection Regulation (GDPR). The public consultation gathered 540 replies from 225 contributors.

The CNIL focused on four main issues in the consultation:

  • the data protection officer;
  • the right to data portability;
  • the data protection impact assessments;
  • the certification mechanism.

These are also the four themes in the action plan of the Article 29 Working Party for 2016.

This post (Part I) will summarise the results and action plan for the first two themes, while the last two will be dealt with in a second post (Part II). [Disclaimer: all quotations are translated from French].

1) On the data protection officer

According to Article 37 GDPR, both the controller and the processor must designate a data protection officer where the processing is carried out by a public authority (1)(a), where their core activities consist of processing operations which require regular and systematic monitoring of data subjects on a large scale (1)(b) and where their core activities consist of processing sensitive data on a large scale (1)(c).

The report reveals that there are many more questions than answers or opinions about how Article 37 should be applied in practice. In fact, most of the contributions are questions from the contributors (see pages 2 to 4). They raise interesting points, such as:

  • What is considered to be a conflict of interest – who will not be able to be appointed?
  • Should the DPO be appointed before May 2018 (when GDPR becomes applicable)?
  • Will the CNIL validate the mandatory or the optional designation of a DPO?
  • What exactly will be the role of the DPO in the initiative for and in the drafting of the data protection impact assessments?
  • What are the internal consequences if the recommendations of the DPO are not respected?
  • Is it possible that the DPO becomes liable under Criminal law for how he/she monitors compliance with the GDPR?
  • Should the DPO be in charge of keeping the register of processing operations, and should the register be communicated to the public?
  • Should only the contact details of the DPO be published, or also his/her identity?
  • Must the obligations in the GDPR also be applied to the appointment of a DPO that is made voluntarily (outside the three scenarios in Article 37(1))?
  • Can a DPO be, in fact, a team? Can a DPO be a legal person?
  • Are there any special conditions with regard to the DPO for small and medium enterprises?

The CNIL underlines that for this topic an important contribution was brought by large professional associations during discussions, in addition to the large number of replies received online.

In fact, according to the report, the CNIL acknowledges “the big expectations of professional associations and federations to receive clarifications with regard to the function of the DPO, as they want to prepare as soon as possible and in a sustainable way for the new obligations” (p. 5).

As for future steps, the CNIL recalls that the Article 29 Working Party will publish Guidelines to help controllers in a practical manner, according to the 2016 action plan. (There’s not much left of 2016, so hopefully we’ll see the Guidelines soon!) The CNIL announces that it will also launch national communication campaigns and will intensify the training sessions and workshops with the current CILs (Correspondants Informatique et Libertés – a role similar to that of a DPO).

2) On the right to data portability


Article 20 GDPR provides that the data subject has the right to receive a copy of their data in a structured, commonly used and machine-readable format and has the right to transmit those data to another controller only if the processing is based on consent or on a contract.

First, the CNIL notes that there was “a very strong participation of the private sector submitting opinions or queries regarding the right to data portability, being especially interested in the field of application of the new right, the expenses its application will require and its consequences on competition” (p. 6).

According to the report, the right to data portability is perceived as an instrument that allows regaining the trust of persons in the processing of their personal data, bringing more transparency and more control over the processing operation (p. 6).

On the other hand, the organisations that replied to the public consultation are concerned about the additional investments they will need to make to implement this right. They are also concerned about (p. 6):

  • “the risk of creating an imbalance in competition between European and American companies, as European companies are directly under the obligation to comply with this right, whereas American companies may try to circumvent the rules”. My comment here would be that they should not be concerned about that, because if they target the same European public to offer services, American companies will also be under a direct obligation to comply with this right.
  • “the immediate cost of implementing this right (for instance, the development of automatic means to extract data from databases), which cannot be charged to the individuals, but which will be a part of the management costs and will increase the costs for the services”.
  • “the level of responsibility if the data are mishandled or if the data handed over to the person are not up to date”.

The respondents to the public consultation seem to be a good resource for technical options regarding the format in which data should be transferred. Respondents argued in favour of open source formats, which will make reusing the data easier and which will be cheaper compared to proprietary solutions. Another suggested solution is the development of Application Programming Interfaces (APIs) based on open standards, without a specific licence key. This way, people will be able to use the tools of their choice.
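
To make the open-format idea concrete, here is a minimal sketch of what a portability export in a structured, commonly used and machine-readable format could look like, using plain JSON and Python’s standard library. It is my own illustration, not an example from the CNIL report, and every field name and value is invented.

```python
import json
from datetime import date

# Hypothetical records a controller might hand over under Article 20 GDPR.
export = {
    "controller": "example-service.eu",        # invented controller
    "data_subject_id": "user-12345",           # invented identifier
    "generated_on": date.today().isoformat(),
    "data": {
        "profile": {"name": "Jane Doe", "email": "jane@example.org"},
        "orders": [{"id": 1, "item": "book", "price_eur": 12.50}],
    },
}

# JSON is structured, commonly used and machine-readable, so another
# controller (or the person's own tools) can reuse it without a
# proprietary licence.
with open("portability_export.json", "w") as f:
    json.dump(export, f, indent=2, ensure_ascii=False)
```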

One of the needs that emerged from the consultation was to clarify whether the data that are subject to the right to portability must be raw data, or whether transferring a “summary” of the data would suffice. Another question was whether the data could be requested by a competing company, with a mandate from the data subject. There were also questions regarding the interplay between the right to data portability and the right of access, and about how data security could be ensured for the transfer of the “ported” data.

In the concluding part, the CNIL acknowledges that two trends could already be seen within the replies: on the one hand, companies tend to want to limit as much as possible the applicability of the right to data portability, while on the other hand, the representatives of civil society are looking to encourage persons to take their data into their own hands and to reinvent their use (p. 10).

According to the report, the Technology Subgroup of the Article 29 Working Party is currently drafting guidelines with regard to the right to data portability. “They will clarify the field of application of this right, taking into account all the questions raised by the participants in the consultation, and they will also detail ways to reply to portability requests”, according to the report (p. 10).

***

Find what you’re reading useful? Consider supporting pdpecho.

Click HERE for Part II of this post.

A look at political psychological targeting, EU data protection law and the US elections

Cambridge Analytica, a company that uses “data modeling and psychographic profiling” (according to its website), is credited with having decisively contributed to the outcome of the presidential election in the U.S. They did so by using “a hyper-targeted psychological approach” allowing them to see trends among voters that no one else saw and thus to model the speech of the candidate to resonate with those trends. According to Mashable, the same company also assisted the Leave.EU campaign that led to Brexit.

How do they do it?

“We collect up to 5,000 data points on over 220 million Americans, and use more than 100 data variables to model target audience groups and predict the behavior of like-minded people” (my emphasis), states their website (for comparison, the US has a population of 324 million). They further explain that “when you go beneath the surface and learn what people really care about you can create fully integrated engagement strategies that connect with every person at the individual level” (my emphasis).

According to Mashable, the company “uses a psychological approach to polling, harvesting billions of data from social media, credit card histories, voting records, consumer data, purchase history, supermarket loyalty schemes, phone calls, field operatives, Facebook surveys and TV watching habits“. This data “is bought or licensed from brokers or sourced from social media”.

(For a person who dedicated their professional life to personal data protection this sounds chilling.)

Legal implications

Under US privacy law this kind of practice seems to have no legal implications, as it doesn’t involve processing by any authority of the state, it’s not a matter of consumer protection and it doesn’t seem to fall, prima facie, under any piece of the piecemeal legislation dealing with personal data in the U.S. (please correct me if I’m wrong).

Under EU data protection law, this practice would raise a series of serious questions (see below), without even getting into the debate of whether this sort of intimate profiling would also breach the right to private life as protected by Article 7 of the EU Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights (the right to personal data protection and the right to private life are protected separately in the EU legal order). Put simply, the right to data protection enshrines the “rules of the road” (safeguards) for data that is being processed on a lawful ground, while the right to private life protects the inner private sphere of a person altogether, meaning that it can prohibit unjustified interferences with the person’s private life. This post will only look at mass psychological profiling from the data protection perspective.

Does EU data protection law apply to the political profilers targeting US voters?

But why would EU data protection law even be applicable to a company creating profiles of 220 million Americans? Surprisingly, EU data protection law could indeed be relevant in this case, if it turns out that the company carrying out the profiling is based in the UK (London-based), as several websites claim in their articles (here, here and here).

Under Article 4(1)(a) of Directive 95/46, the national provisions adopted pursuant to the directive shall apply “where the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State“. Therefore, the territorial application of Directive 95/46 is triggered by the place of establishment of the controller.  Moreover, Recital 18 of the Directive’s Preamble explains that “in order to ensure that individuals are not deprived of the protection to which they are entitled under this Directive, any processing of personal data in the Community (EU – n.) must be carried out in accordance with the law of one of the Member States” and that “in this connection, processing carried out under the responsibility of a controller who is established in a Member State should be governed by the law of that State” (see also CJEU Case C-230/14 Weltimmo, paras. 24, 25, 26).

There are, therefore, no exceptions to applying EU data protection rules to any processing of personal data that is carried out under the responsibility of a controller established in a Member State. Is it relevant here whether the data subjects are not European citizens, and whether they would not even be physically located within Europe? The answer is probably in the negative. Directive 95/46 provides that the data subjects it protects are “identified or identifiable natural persons“, without differentiating them based on their nationality. Neither does the Directive link its application to any territorial factor concerning the data subjects. Moreover, according to Article 8 of the EU Charter of Fundamental Rights, “everyone has the right to the protection of personal data concerning him or her”.

I must emphasise here that the Court of Justice of the EU is the only authority that can interpret EU law in a binding manner and that until the Court decides how to interpret EU law in a specific case, we can only engage in argumentative exercises. If the interpretation proposed above would be found to have some merit, it would indeed be somewhat ironic to have the data of 220 million Americans protected by EU data protection rules.

What safeguards do persons have against psychological profiling for political purposes?

This kind of psychological profiling for political purposes would raise a number of serious questions. First of all, there is the question of whether this processing operation involves processing of “special categories of data”. According to Article 8(1) of Directive 95/46, “Member States shall prohibit the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of data concerning health or sex life.” There are several exceptions to this prohibition, of which only two would conceivably be applicable to this kind of profiling:

  • if the data subject has given his explicit consent to the processing of those data (letter a) or
  • the processing relates to data which are manifestly made public by the data subject (letter e).

In order for this kind of psychological profiling to be lawful, the controller must either obtain explicit consent to process all the data points used, for every person profiled, or use only those data points that were manifestly made public by the person.

Moreover, under Article 15(1) of Directive 95/46, the person has the right “not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.”. It is, of course, open to interpretation to what extent psychological profiling for political purposes produces legal effects or significantly affects the person.

Another problem concerns the obligation of the controller to inform every person concerned that this kind of profiling is taking place (Articles 10 and 11 of Directive 95/46) and to give them details about the identity of the controller, the purposes of the processing and all the personal data that is being processed. In addition, the person should be informed that he or she has the right to ask for a copy of the data the controller holds about him or her and the right to ask for the erasure of that data if it was processed unlawfully (Article 12 of Directive 95/46).

Significantly, the person has the right to opt-out of a processing operation, at any time, without giving reasons, if that data is being processed for the purposes of direct marketing (Article 14(b) of Directive 95/46). For instance, in the UK, the supervisory authority – the Information Commissioner’s Office, issued Guidance for political campaigns in 2014 and gave the example of “a telephone call which seeks an individual’s opinions in order to use that data to identify those people likely to support the political party or referendum campaign at a future date in order to target them with marketing” as constituting direct marketing.

Some thoughts

  • The analysis of how EU data protection law is relevant to this kind of profiling would be more poignant if it were made under the General Data Protection Regulation, which will become applicable on 25 May 2018 and which has a special provision on profiling.
  • The biggest fine ever issued by the supervisory authority in the UK is £350,000, issued this year. Under the GDPR, breaches of data protection rules will lead to fines of up to 20 million euro or 4% of the controller’s global annual turnover for the previous year, whichever is higher (see the short arithmetic sketch after this list).
  • If any company based in the UK used this kind of psychological profiling and micro-targeting for the Brexit campaign, that processing operation would undoubtedly fall under the rules of EU data protection law. The same holds true for any analytics company that provides these services to political parties anywhere in the EU using personal data of EU persons. Perhaps this is a good time to revisit the discussion we had at CPDP2016 on political behavioural targeting (who would have thought the topic would gain so much momentum this year?).
  • I wonder if data protection rules should be the only “wall (?)” between this sort of targeted-political-message-generating campaign profiling and the outcome of democratic elections.
  • Talking about ethics, data protection and big data together is becoming more urgent every day.
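
As referenced in the list above, here is a trivial sketch of the arithmetic behind the GDPR fine cap in Article 83(5); the turnover figure is invented purely for illustration.

```python
def max_gdpr_fine(annual_worldwide_turnover_eur: float) -> float:
    """Upper limit of an Article 83(5) GDPR fine: EUR 20 million or 4% of
    the total worldwide annual turnover of the preceding financial year,
    whichever is higher."""
    return max(20_000_000, 0.04 * annual_worldwide_turnover_eur)

# A company with EUR 1 billion turnover faces a cap of EUR 40 million:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```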

***

Find what you’re reading useful? Consider supporting pdpecho.

Fresh EU data protection compliance guidance for mobile apps, from the EDPS

The European Data Protection Supervisor adopted this week “Guidelines on the protection of personal data processed by mobile applications provided by European Union institutions”.

While the guidelines are addressed to the EU bodies that provide mobile apps to interact with citizens (considering the mandate of the EDPS is to supervise how EU bodies process data), the guidance is just as valuable to all controllers processing data via mobile apps.

The Guidelines acknowledge that “mobile applications use the specific functions of smart mobile devices like portability, variety of sensors (camera, microphone, location detector…) and increase their functionality to provide great value to their users. However, their use entails specific data protection risks due to the easiness of collecting great quantities of personal data and a potential lack of data protection safeguards.”

Managing consent

One of the most difficult data protection issues that controllers of processing operations through mobile apps face is complying with the consent requirements. The Guidelines provide valuable guidance on how to obtain valid consent (see paragraphs 25 to 29).

  • Adequately inform users and obtain their consent before installing any application on the user’s smart mobile device.
  • Users have to be given the option to change their wishes and revoke their decision at any time.
  • Consent needs to be collected before any reading or storing of information from/onto the smart mobile device is done.
  • An essential element of consent is the information provided to the user. The type and accuracy of the information provided needs to be such as to put users in control of the data on their smart mobile device to protect their own privacy.
  • The consent should be specific (highlighting the type of data collected), expressed through an active choice and freely given (users should be given the opportunity to make a real choice).
  • The apps must provide users with real choices on personal data processing: the mobile application must ask for granular consent for every category of personal data it processes and every relevant use. If the OS does not allow a granular choice, the mobile application itself must implement this.
  • The mobile application must feature functionalities to revoke users’ consent for each category of personal data processed and each relevant use. The mobile application must also provide functionalities to delete users’ personal data where appropriate. (A minimal sketch of this granular, revocable consent follows below the list.)
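
As a purely illustrative sketch of the granular consent and revocation points above (not an example from the EDPS Guidelines; all names are invented), an app could keep a per-category consent state and check it before collecting anything:

```python
class ConsentRegistry:
    """Tracks the user's consent per category of personal data.

    Collection for a category is only allowed while consent for that
    category has been given and not revoked.
    """

    def __init__(self):
        self._consents = {}  # category -> bool

    def give(self, category: str):
        self._consents[category] = True

    def revoke(self, category: str):
        self._consents[category] = False

    def allowed(self, category: str) -> bool:
        return self._consents.get(category, False)


registry = ConsentRegistry()
registry.give("location")                  # user opts in to location data
assert registry.allowed("location")
registry.revoke("location")                # user changes their mind at any time
assert not registry.allowed("location")    # stop collecting location data
```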

The Guidelines invite controllers to “analyse the compliance of its intended processing before implementing the mobile application during the feasibility check, business case design or an equivalent early definition stage of the project”. The controller “should take decisions on the design and operation of the planned mobile application based on an information security risk assessment”.

Other recommendations concern:

  • data minimisation – “the mobile application must collect only those data that are strictly necessary to perform the lawful functionalities as identified and planned”.
  • third party components or services – “Assess the data processing features of a third party component or of a third party service before integrating it into a mobile application”.
  • security of processing – “Apply appropriate information security risk management to the development, distribution and operation of mobile applications” (paragraphs 38 to 41).
  • secure development, operation and testing – “The EU institution should have documented secure development policies and processes for mobile applications, including operation and security testing procedures following best practices”.
  • vulnerability management – “Adopt and implement a vulnerability management process appropriate to the development and distribution of mobile applications” (paragraphs 47 to 51).
  • protection of personal data in transit and at rest – “Personal data needs to be protected when stored in the smart mobile device, e.g. through effective encryption of the personal data”.

 

***

Find what you’re reading useful? Consider supporting pdpecho.

 

 

The problem with the Privacy Shield challenges: do the challengers have legal standing?

by Gabriela Zanfir Fortuna


There are currently two ongoing challenges to the Privacy Shield before the CJEU (one submitted by Digital Rights Ireland and one by a coalition of French NGOs). Before the Court even reaches the merits of these cases, there is a risk that it will not consider them admissible under the legal standing rules. The Court is very strict when applying the rules under Article 263(4) TFEU, and most of the actions for annulment initiated by natural or legal persons are declared inadmissible due to lack of legal standing.

The European Commission’s adequacy decision for transfers of personal data between the EU and the US under the Privacy Shield framework was challenged directly before the Court of Justice of the EU – the Grand Chamber, to be more precise – under the procedure for “actions for annulment” enshrined in Article 263 TFEU.

An “action for annulment” under Article 263 TFEU allows the CJEU to “review the legality of legislative acts, of acts of the Council, of the Commission and of the European Central Bank, other than recommendations and opinions, and of acts of the European Parliament and of the European Council intended to produce legal effects vis-à-vis third parties”.

Such actions can be brought by three categories of applicants.

The privileged applicants – any “Member State, the European Parliament, the Council or the Commission on grounds of lack of competence, infringement of an essential procedural requirement, infringement of the Treaties or of any rule of law relating to their application, or misuse of powers”, according to the second paragraph of Article 263.

A second category of challengers is defined in the third paragraph of Article 263: the Court of Auditors, the European Central Bank and the Committee of the Regions. They can bring actions for annulment before the Court only “for the purpose of protecting their prerogatives”.

Finally, a third category of challengers comprises “any natural or legal person”, according to the fourth paragraph of Article 263 TFEU. But for private parties to actually have legal standing for such actions, the conditions to be met are quite strict (this is why they are also known as “non-privileged applicants”). In fact, there are only three instances where such an action is declared admissible:

  1. if the act is addressed to that person or
  2. if the act is of direct and individual concern to them or
  3. if the act is “a regulatory act which is of direct concern to them and does not entail implementing measures”.

The third possibility was introduced by the Treaty of Lisbon, in 2009, and was meant to address the critique that individuals did not have a real possibility to challenge EU acts, due to the very strict application of the “direct and individual concern” test by the Court.

As it was explained by scholars, “particularly the requirement that the act be of individual concern proves in practice to be a hurdle that is virtually insurmountable” (1). According to the much criticised Plaumann test, the Court established that “persons other than those to whom a decision is addressed may only claim to be individually concerned if that decision affects them by reason of certain attributes which are peculiar to them or by reason of circumstances in which they are differentiated from all other persons and by virtue of these factors distinguishes them individually just as in the case of the person addressed” (Case 25/62 Plaumann v. Commission, 15 July 1963).

To understand how the Court applies the Plaumann test, a very good example is the Toepfer case (Case 106-107/63).

The Court will however grant standing to those who can show that the category of applicant into which they fall is closed, that is, incapable of taking any new members; an example is Toepfer, where a certain decision of the German government to delay the granting of a licence to import grain only affected those who had applied for the licence on 1st October 1963. As this was a completed past event, the category of grain importers applying on that day (which of course included the applicant) was closed to any new members. Mr Toepfer was thus individually concerned.” – R. Lang, “Quite a challenge: Article 263(4) TFEU and the case of the mystery measures”, p. 4-5.

The Plaumann test survived decades of challenges, including a decision of the Court of First Instance (Case T-177/01 Jégo-Quéré, see particularly paragraph 51) that tried to reform it but that was quashed in appeal by the Court of Justice. The Court of First Instance argued that denying legal standing to the applicants in this case meant they would have no right to an effective remedy, due to their particular circumstance. The Court of Justice, in appeal, did not give merit to this argument.

Some nuances have been added to the Plaumann test in different areas of law, but its essence has remained the same. For instance, the Court detailed additional conditions under which private parties may be individually concerned by provisions of regulations imposing anti-dumping duties (see Cases T-112/14 to T-116/14 and T-119/14 Molinos Rio de la Plata, 15 September 2016, paras 43 to 45). These conditions, however, apply in addition to, and after, the Plaumann test (see para 40 of the Molinos Rio de la Plata cases).

Therefore, it will be extremely difficult, if not impossible, for the NGOs that initiated the actions for annulment of the Commission’s adequacy decision to meet the Plaumann test. If they manage to do so, it will only be with a change in settled case-law.

However, there is another line of argumentation that the NGOs could use and that would have a better chance of success: the third limb of Article 263(4), introduced in 2009 by the Treaty of Lisbon, which allows private parties to challenge regulatory acts which are of direct concern to them and which do not entail implementing measures.

This way, the applicants will not have to prove they are individually concerned by the act, so the Plaumann test will not be applicable. However, they will enter a new, almost uncharted field: regulatory acts which do not entail implementing measures.

They will have to prove that:

  • the adequacy decision is a regulatory act;
  • the adequacy decision is of direct concern to them;
  • the adequacy decision does not entail any implementing measures.
1. Is the adequacy decision a regulatory act?

According to case-law following the entry into force of the Lisbon Treaty and the changes that were brought to Article 263(4), “the meaning of ‘regulatory act’ for the purposes of the fourth paragraph of Article 263 TFEU must be understood as covering all acts of general application apart from legislative acts” (Case T‑18/10 Inuit Tapiriit Kanatami and Others v Parliament and Council, 6 September 2011, para 56; Case T-262/10 Microban 25 October 2011, para 21).

In Microban, the Court found that the Commission Decision at issue was adopted “in the exercise of implementing powers and not in the exercise of legislative powers” (para 22), which confirmed its nature as a “regulatory act”. Further, the Court also took into account that “the contested decision is of general application in that it applies to objectively determined situations and it produces legal effects with respect to categories of persons envisaged in general and in the abstract” (para 23).

As the adequacy decision was adopted by the Commission in the exercise of implementing powers (under Directive 95/46), and as it is of general application, producing legal effects with respect to categories of persons envisaged in general and in the abstract, it will most probably be classified as a “regulatory act” for the purposes of Article 263(4) TFEU.

However, there are two more conditions to be met cumulatively before the actions are declared admissible.

2. Are the applicants directly concerned by the act?

The Court uses several criteria to establish whether there is a “direct concern”.

The classic test the Court usually uses is the following: “firstly, the contested Community measure must directly affect the legal situation of the individual and, secondly, it must leave no discretion to its addressees, who are entrusted with the task of implementing it, such implementation being purely automatic and resulting from Community rules without the application of other intermediate rules” (Case C‑386/96 P Dreyfus v Commission, para 43, Joined Cases C‑445/07 P and C‑455/07 P Commission v Ente per le Ville vesuviane and Ente per le Ville vesuviane v Commission, para 45; Microban, para 27).

For instance, in Microban this test was met because the contested decision prohibited the marketing of materials containing triclosan. The applicants bought triclosan and used it to manufacture a product, which was further sold on for use in the manufacture of plastic materials. Therefore, the Court considered “the contested decision directly affects their legal position” (para 28).

On the other hand, in a very recent case, the Court found that “no provision of the contested act is directly applicable to the applicants, in the sense that it would confer rights or impose obligations on them. Consequently, the contested act does not affect their legal position, and therefore the condition of direct concern, as referred to in the second and third situation referred to in the fourth paragraph of Article 263 TFEU, is not met” (Case T-600/15 Pesticide Action Network Europe, 28 September 2016, para 62).

This case concerned an action brought by an environmental NGO and various associations of beekeepers challenging an Implementing Regulation that approved the use of a substance called sulfoxaflor as a pesticide. The Court dismissed all the arguments brought forward by the applicants to prove they were directly concerned by this act: from the claim that it affected the beekeepers’ right to property and right to conduct a business, due to the harmful effect of sulfoxaflor on bees, to the claim that the applicants had participated in the decision-making process for the Implementing Regulation, and to the claim that denying them legal standing breached their right to environmental protection under Article 37 of the Charter and their right to an effective judicial remedy under Article 47 of the Charter (see paras 46 to 50).

Thus, it will not be easy to argue that the adequacy decision is of direct concern to the applicants. For instance, it could be argued that the decision primarily affects the legal situation of controllers (and not that of data subjects), since it is controllers who are allowed to transfer personal data pursuant to it.

However, it will not be impossible either to argue that the data subjects represented by the applicant NGOs are directly concerned. A first argument, perhaps of a general nature, would be that the purpose of the Decision is to establish that companies adhering to the Privacy Shield ensure an adequate level of protection of personal data by reference to the level of protection afforded in the EU, with the consequence that transfers of personal data to those companies take place automatically, without any further safeguard and without any additional scrutiny or authorisation. The Decision therefore affects the legal situation of individuals in the EU whose data are transferred, as they are not able to oppose the transfer before it takes place.

A more concrete argument could be the recognition of the rights of the data subject in Annex II of the Decision (the Privacy Shield Principles), which amounts to admitting that the Decision, through its Annex, grants rights to the individuals represented by the applicants.

Another argument could be the finding of the Court in Schrems that legislation allowing mass surveillance and access to the content of communications compromises the essence of the fundamental right to private life enshrined in Article 7 of the Charter (see Schrems C-362/14, paras 93 and 94). On this basis, a regulatory act whose direct consequence is the transfer of personal data to a legal system that allows such a fundamental breach of Article 7 of the Charter could be seen as directly affecting the legal situation of the data subjects represented by the applicant NGOs. But for the Court to take this argument into account would mean acknowledging the existence of mass surveillance and access to the content of communications in the US at the time the decision was adopted.

3. Does the adequacy decision entail implementing measures?

This will be the most difficult criterion to meet. The case-law of the Court on what can constitute implementing measures is very strict (from the point of view of granting legal standing), in the sense that the Court interprets the concept of “implementing measures” for the purposes of Article 263(4) TFEU broadly (lato sensu).

For instance, in a landmark judgment in this area, T & L Sugars (Case C-456/13 P, 28 April 2015), concerning an implementing regulation, “the measures at the Member States’ level consisted of receiving applications from economic operators, checking their admissibility, submitting them to the Commission and then issuing licences on the basis of the allocation coefficients fixed by the Commission” (as summarised here). So, even though AG Cruz Villalón concluded that such “non-substantive, or ‘ancillary’, measures […] by the national authorities […] in the exercise of a circumscribed power”, or a “purely administrative activity”, are not implementing measures (Opinion in Case C-456/13 P, T & L Sugars, paras 31 and 34) (2), the Court found that “the decisions of the national authorities granting such certificates, which apply the coefficients fixed by Implementing Regulation No 393/2011 to the operators concerned, and the decisions refusing such certificates in full or in part therefore constitute implementing measures” (para 40).

Article 5 of the Privacy Shield adequacy decision states that “Member States shall take all the measures necessary to comply with this Decision”. It therefore allows for further administrative measures by the Member States. But what would those measures be in practice? Could the Court consider them ancillary enough not to amount to “implementing measures”?

On the other hand, it is also clear that before the adequacy decision produces effects with respect to a particular US company, that company must go through an administrative procedure which could amount to a certification procedure similar to the one in the T & L Sugars case. But in this case, will it matter that the alleged “implementing measures” must be taken by a third country and not by a Member State?

Conclusion

In conclusion, the question of the applicants’ legal standing in the two cases challenging the Privacy Shield decision is not at all an easy one. The odds (based on existing case-law) seem to lean more towards the actions for annulment being declared inadmissible. But this is why a “legal precedent” system is exciting: the Court can always nuance and, if necessary, change its case-law depending on the particular elements of each case.

However, even if these actions are declared inadmissible, this does not mean that the NGOs concerned will not be able to challenge the Privacy Shield decision in national courts and bring the case before the CJEU afterwards via the preliminary ruling procedure under Article 267 TFEU. In fact, even a finding of inadmissibility would help their subsequent actions at national level: the national courts would not be able to dismiss their request to refer preliminary ruling questions to the CJEU on the ground that they failed to challenge the decision directly under Article 263 TFEU (when they could arguably have had legal standing to do so).

Whatever the outcome of these two challenges, the decision of the Court will be very important for the “legal standing of natural and legal persons” doctrine in general, on the one hand, and for the application of Article 263(4) TFEU to the various acts of the future European Data Protection Board (see Recital 143 of the GDPR), on the other.

…………………………………………………………………

(1) Jan H. Jans, On Inuit and Judicial Protection in a Shared Legal Order, European Environmental Law Review, August 2012, p. 189.

(2) Jasper Krommendijk, The seal product cases: the ECJ’s silence on admissibility in Inuit Tapiriit Kanatami II, available here.

***

Find what you’re reading useful? Consider supporting pdpecho.

The GDPR already started to appear in CJEU’s soft case-law (AG Opinion in Manni)

CJEU’s AG Bot referred to the GDPR in his recent ‘right to be forgotten’ Opinion

It may only become applicable on 25 May 2018, but the GDPR already made its official debut in the case-law of the CJEU.

It was the last paragraph (§101) of the Opinion of AG Bot in Case C-398/15 Manni, published on 8 September, that specifically referred to Regulation 2016/679 (the official name of the GDPR). The case concerns the question of whether the right to erasure (the accurate name of the better-known “right to be forgotten”), as enshrined in Article 12 of Directive 95/46, also applies to the personal data of entrepreneurs recorded in the Public Registry of companies, when their company went bankrupt years ago. Curiously, the preliminary ruling question does not specifically refer to the right to erasure, but to the obligation in Article 6(1)(e) for controllers not to retain the data longer than necessary to achieve the purpose for which they were collected.

In fact, Mr Manni had requested his regional Chamber of Commerce to erase his personal data from the Public Registry of companies, after he found out that he was losing clients who performed background checks on him through a private company specialised in finding information in the Public Registry. This was because Mr Manni had been an administrator of a company that was declared bankrupt more than 10 years before the facts in the main proceedings; indeed, the former company itself had since been struck off the Public Registry (§30).

Disclaimer! The Opinion is not yet available in English, but only in a handful of other official EU languages. Therefore, the following quotes are all my translation from French or Romanian.

AG Bot advised the Court to reply to the preliminary ruling questions in the sense that all personal data in the Public Registry of companies should be retained there indefinitely, irrespective of whether the companies to whose administrators the data refer are still active. “Public Registries of companies cannot achieve their main purpose, namely the consolidation of legal certainty by disclosing, in accordance with the transparency principle, legally accurate information, if access to this information were not allowed indefinitely to all third parties” (§98).

The AG adds that “the choice of natural persons to get involved in the economic life through a commercial company implies a permanent requirement of transparency. For this main reason, detailed throughout the Opinion, I consider that the interference with the right to the protection of personal data that are registered in a Public Registry of companies, specifically ensuring their publicity for an indefinite period of time and aimed towards any person who asks for access to these data, is justified by the preponderant interest of third parties to access those data” (§100).

Restricting the circle of ‘interested third parties’ would be incompatible with the purpose of the Public Registry

Before reaching this conclusion, the AG dismissed a proposal by the Commission that access to the personal data of administrators of bankrupt companies could be limited to those third parties that “show a legitimate interest” in obtaining it.

The AG considered that this suggestion “cannot, at this stage of development of EU law, ensure a fair balance between the objective of protecting third parties and the right to the protection of personal data registered in Public Registries of companies” (§87). In this regard, he recalled that the objective of protecting the interests of third parties enshrined in the First Council Directive 68/151 “is provided for in a sufficiently wide manner so as to encompass not only the creditors of a company, but also, in general, all persons that want to obtain information regarding that company” (§88).

Earlier, the AG had also found that the suggestion to anonymise data regarding the administrators of bankrupt companies is not compatible with the historical function of the Public Registry and with the objective to protect third parties that is inherent to such registries. “The objective to establish a full picture of a bankrupt company is incompatible with processing anonymous data” (§78).

Throughout the Opinion, the AG mainly interprets the principles underpinning the First Council Directive 68/151/EEC (of 9 March 1968 on co-ordination of safeguards which, for the protection of the interests of members and others, are required by Member States of companies within the meaning of the second paragraph of Article 58 of the Treaty, with a view to making such safeguards equivalent throughout the Community), and it is apparent that, in his view, that Directive takes precedence over Directive 95/46/EC.

Finally: the reference to the GDPR

The AG never refers in his analysis to Article 12 of Directive 95/46, which grants data subjects the right to erasure. However, in the last paragraph of the Opinion, the AG does refer to Article 17(3)(b) and (d) of Regulation (EU) 2016/679 (yes, the GDPR). He applies Article 17 GDPR to the facts of the case and notes that the preceding analysis “is compatible” with it, because “this Article provides that the right to erasure of personal data, or ‘the right to be forgotten’, does not apply to a processing operation ‘for compliance with a legal obligation which requires processing by Union or Member State law to which the controller is subject or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller’ or ‘for archiving purposes in the public interest'” (§101).

While I find the Opinion of the AG clear and well argued, I have two comments. I wish he had referred more comprehensively to the fundamental rights dimension of the case when balancing the provisions of the two directives. But most of all, I wish he had analysed the right to erasure itself, the conditions that trigger it and the exemptions under Article 13 of Directive 95/46.

My bet on the outcome of the case: the Court will follow the AG’s Opinion to a large extent. However, it may be more focused on the fundamental rights aspect of balancing the two Directives and it may actually analyse the content of the right to erasure and its exceptions. The outcome, however, is likely to be the same.

A small thing that bugs me about this case is the difference between searching a Registry of Companies out of interest in a particular company and searching it out of interest in a specific natural person. Any third party may very well be interested in finding out everything there is to know about bankrupt Company X, and thereby discover that Mr Manni was its administrator. To me, that is not the same situation as searching the Public Registry of companies using Mr Manni’s name in order to find out everything about Mr Manni’s background. In §88 the AG even mentions, when recognising the all-encompassing interest of every third party to access all information about a certain company indefinitely, that Directive 68/151 protects the interest of “all persons that want to obtain information regarding this company“. I know the case is about keeping or deleting Mr Manni’s personal data from the Registry, and that ultimately it is important to keep the information there because of the general interest in knowing everything about the history of a company. However, does it make any difference for the lawfulness of certain processing operations related to the data in the Registry that the Registry of companies is used to create profiles of natural persons? I don’t know. But it is something that bugged me while reading the Opinion. Moreover, if you compare this situation with the “clean slate” rules under which certain offenders have their data erased from the criminal record, it is even more troubling. (Note: at §34 the AG specifies that his Opinion only concerns the processing of personal data by the Chamber of Commerce, and not by private companies specialising in providing background information about entrepreneurs.)

Fun fact #1

The GDPR made its ‘unofficial’ debut in the case-law of the CJEU in the Opinion of AG Jääskinen in C-131/12 Google v. Spain, delivered on 25 June 2013. In fact, it was precisely Article 17 that was referred to in that Opinion as well, in §110. There is another reference to the GDPR in §56, mentioning the new rules on the field of application of EU data protection law. Back then, the text of the GDPR was merely a proposal of the Commission – neither the EP nor the Council had yet adopted their own versions of the text, before entering the trilogue which resulted in the adopted text of Regulation 2016/679.

Fun fact #2

AG Bot is also the AG who delivered the Opinion in the Schrems case. The Court followed his Opinion to a large extent in its judgment. There is a fair chance the Court will follow his Opinion again.

***

Find what you’re reading useful? Consider supporting pdpecho.