Will the ePrivacy Regulation overshadow the GDPR in the age of IoT?

by Gabriela Zanfir-Fortuna

The ePrivacy draft regulation, published by the European Commission on Jan. 10, updates and upgrades Directive 2002/58/EC (the “ePrivacy directive”), the source of the infamous “cookie banner.” Under its official name, the Proposal for a Regulation Concerning the Respect for Private Life and the Protection of Personal Data in Electronic Communications, the draft ePrivacy regulation reorganizes and even re-conceptualizes the system of protecting the privacy of electronic communications.

Armed with equally large fines and equally wide territorial application, the future ePrivacy rules may end up overshadowing the GDPR in the age of the internet of things, due to their wide material scope of application, which could potentially cover all data related to connected devices.

Protecting the fundamental right to confidentiality

With the proposal for an ePrivacy regulation distinct from the GDPR, the EU makes it clear that the two sets of rules correspond to different fundamental rights: The GDPR is primarily an expression of the fundamental right to the protection of personal data as enshrined in Article 8 of the EU Charter of Fundamental Rights, while the ePrivacy draft regulation details the right to respect for private life, as enshrined in Article 7 of the Charter (see Recital 1 of the proposal).

This differentiation is of great consequence, affecting the manner in which EU courts will interpret and apply the rules. The protection of the right to private life is construed so as to restrict interferences to the private life to the minimum, whereas the right to the protection of personal data is construed so as to provide for “rules of the road” on how personal data must be used.

A telling example of ePrivacy rules intended to protect private life is the reference in the draft to the fact that “terminal equipment of end-users of electronic communications networks and any information relating to the usage of such terminal equipment … are part of the private sphere of the end-users requiring protection under the Charter of Fundamental Rights of the EU and the European Convention of Human Rights” (emphasis added). (Recital 20)

In addition, many member states (e.g., Germany, Italy, the Netherlands, Greece, Romania, Denmark, Belgium, Poland, Estonia, Bulgaria, the Czech Republic) provide in their Constitutions for a separate right to the “secrecy of correspondence” (distinct from the right to respect for private life), which can be restricted only in limited situations and usually only for objectives of public safety. The rules of the proposal fall under the scope of this particular right and are thus capable of igniting national constitutional reviews – for instance, concerning the rules that allow access to the content of communications.

Electronic communications data and information on smart devices, the centerpiece of the ePrivacy framework

The new system of protecting the confidentiality of communications is built around two concepts:

  • “Electronic communications data,” which includes electronic communications content and electronic communications metadata; and
  • “Information related to the terminal equipment of end-users.”

Article 2(1) of the proposal establishes that the regulation “applies to processing of electronic communications data carried out in connection with the provision and the use of electronic communications services and to information related to the terminal equipment of end-users”.

This represents an important change compared to the current regime of the ePrivacy directive, which is centered on the processing of personal data (see Article 3(1) of Directive 2002/58). For the new rules to apply, it will not matter whether the data passing through electronic communications channels fall under the GDPR definition of personal data. Recital 4 of the proposal explains that “electronic communications data may include personal data as defined in [the GDPR],” meaning they may equally include data that are not personal.

This article was originally published on iapp.org, where the full text is available.

WP29 published its 2017 priorities for GDPR guidance

The Article 29 Working Party published in mid-January its new set of priorities for providing GDPR guidance in 2017. This came after WP29 published in December three sets of much-awaited Guidelines on the application of the GDPR: on Data Protection Officers, on the right to data portability and on identifying the lead supervisory authority (pdpEcho intends to take a closer look at all of them in the following weeks). So what are the new priorities?

First of all, WP29 committed to finalising the work started in 2016 that was not adopted by the end of the year:

  • Guidelines on the certification mechanism;
  • Guidelines on processing likely to result in a high risk and Data Protection Impact Assessments;
  • Guidance on administrative fines;
  • Setting up admin details of the European Data Protection Board (e.g. IT, human resources, service level agreements and budget);
  • Preparing the one-stop-shop and the EDPB consistency mechanism.

Secondly, WP29 committed to starting assessments and providing guidance on:

  • Consent;
  • Profiling;
  • Transparency.

Lastly, in order to take into account the changes brought by the GDPR, WP29 intends to update the already existing guidance on:

  • International data transfers;
  • Data breach notifications.

If you want to be a part of the process, there is good news. WP29 plans to organise another FabLab on April 5 and 6 on the new priorities for 2017, where “interested stakeholders will be invited to present their views and comments”. For more details, regularly check this link.

It seems we’re going to have a busy year.

 

What’s new in research: networks of control built on digital tracking, new models of internet governance

pdpEcho is kicking off 2017 with a brief catalogue of interesting recently published research that sets the tone for the new year.

First, Wolfie Christl and Sarah Spiekermann’s report on “Networks of Control”, published last month, is a must-read for anyone who wants to understand how the digital economy functions on the streams of data we all generate, while reflecting on the ethical implications of this economic model and proposing new models that would try to keep the surveillance society at bay. Second, a new report of the Global Commission on Internet Governance explores governance gaps created by existing global governance structures developed in the analog age. Third, the US National Academy of Sciences recently published a report with concrete proposals on how to reconcile the use of different public and private sources of data for government statistics with privacy and confidentiality. Last, a volume by Angela Daly recently published by Hart Publishing explores how EU competition law, sector-specific regulation, data protection and human rights law could tackle concentrations of power for the benefit of users.

 

  1. “Networks of control. A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy”, by Wolfie Christl, Sarah Spiekermann  [OPEN ACCESS]

“Around the same time as Apple introduced its first smartphone and Facebook reached 30 million users in 2007, online advertisers started to use individual-level data to profile and target users individually (Deighton and Johnson 2013, p. 45). Less than ten years later, ubiquitous and real-time corporate surveillance has become a “convenient byproduct of ordinary daily transactions and interactions” (De Zwart et al 2014, p. 746). We have entered a surveillance society as David Lyon foresaw it already in the early 1990s; a society in which the practices of “social sorting”, the permanent monitoring and classification of the whole population through information technology and software algorithms, have silently become an everyday reality” (p. 118).

One of the realities we need to take into account when assessing this phenomenon is that “Opting out of digital tracking becomes increasingly difficult. Individuals can hardly avoid consenting to data collection without opting out of much of modern life. In addition, persons who don’t participate in data collection, who don’t have social networking accounts or too thin credit reports, could be judged as “suspicious” and “too risky” in advance” (p. 129).

The authors of the report explain that the title “Networks of Control” is justified “by the fact that there is not one single corporate entity that by itself controls today’s data flows. Many companies co-operate at a large scale to complete their profiles about us through various networks they have built up” (p. 7). They also explain that they want to close a gap created by the fact that “the full degree and scale of personal data collection, use and – in particular – abuse has not been scrutinized closely enough”, despite the fact that “media and special interest groups are aware of these developments for a while now” (p. 7).

What I found valuable in the approach of the study is that it also brings forward a topic that is rarely discussed when analysing Big Data, digital tracking and so on: the attempt of such practices to change behaviour at scale. “Data richness is increasingly used to correct us or incentivize us to correct ourselves. It is used to “nudge” us to act differently. As a result of this continued nudging, influencing and incentivation, our autonomy suffers” (p. 7).

A chapter authored by Professor Sarah Spiekermann explores the ethical implications of the networks of control. She applies three ethical normative theories to personal data markets: “The Utilitarian calculus, which is the original philosophy underlying modern economics (Mill 1863/1987). The Kantian duty perspective, which has been a cornerstone for what we historically call “The Enlightenment” (Kant 1784/2009), and finally Virtue Ethics, an approach to life that originates in Aristotle’s thinking about human flourishing and has seen considerable revival over the past 30 years (MacIntyre 1984)” (p. 131).

Methodologically, the report is based on “a systematic literature review and analysis of hundreds of documents and builds on previous research by scholars in various disciplines such as computer science, information technology, data security, economics, marketing, law, media studies, sociology and surveillance studies” (p. 10).

2. Global Commission on Internet Governance “Corporate Accountability for a Free and Open Internet”, by Rebecca MacKinnon, Nathalie Maréchal and Priya Kumar  [OPEN ACCESS]

The report shows that “as of July 2016, more than 3.4 billion people were estimated to have joined the global population of Internet users, a population with fastest one-year growth in India (a stunning 30 percent) followed by strong double digit growth in an assortment of countries across Africa (Internet Live Stats 2016a; 2016b)” (p. 1).

“Yet the world’s newest users have less freedom to speak their minds, gain access to information or organize around civil, political and religious interests than those who first logged on to the Internet five years ago” (p. 1).

Within this framework, the report explores the fact that “ICT sector companies have played a prominent role in Internet governance organizations, mechanisms and processes over the past two decades. Companies in other sectors also play an expanding role in global governance. Multinational companies wield more power than many governments over not only digital information flows but also the global flow of goods, services and labour: one-third of world trade is between corporations, and another third is intra-firm, between subsidiaries of the same multinational enterprise” (p. 5).

The authors also look at the tensions between governments and global companies over requests to access data, weaken encryption and facilitate censorship in ways that contravene international human rights standards.

3. “Innovations in Federal Statistics: Combining Data Sources While Protecting Privacy”, by the National Academy of Sciences [OPEN ACCESS]

The tension between privacy on the one hand and statistical data and censuses on the other compelled the German Constitutional Court to create, in the 1980s, “the right to informational self-determination”. Could statistics bring a significant reform of that sort to the US? Never say never.

According to epic.org, the US National Academy of Sciences recently published a report that examines how disparate federal data sources can be used for policy research while protecting privacy.

The study shows that in the decentralised US statistical system there are 13 agencies whose mission is primarily the creation and dissemination of statistics, and more than 100 agencies that engage in statistical activities. There is a need for stronger coordination and collaboration to enable access to and evaluation of administrative and private-sector data sources for federal statistics. For this purpose, the report advises that “a new entity or an existing entity should be designated to facilitate secure access to data for statistical purposes to enhance the quality of federal statistics. Privacy protections would have to be fundamental to the mission of this entity”. Moreover, “the data for which it has responsibility would need to have legal protections for confidentiality and be protected using the strongest privacy protocols offered to personally identifiable information while permitting statistical use”.

One of the conclusions of the report is that “Federal statistical agencies should adopt modern database, cryptography, privacy-preserving and privacy-enhancing technologies”. 
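One family of such privacy-preserving techniques is differential privacy. As a purely illustrative sketch (my example, not the report's), the Laplace mechanism publishes a noisy count so that any single individual's presence in the data barely affects the released statistic:

```python
import random

def dp_count(true_count: float, epsilon: float) -> float:
    """Release a count with Laplace(0, 1/epsilon) noise added.

    Smaller epsilon means more noise and stronger privacy. The noise is
    drawn as the difference of two exponential variates, which yields a
    Laplace-distributed value without any external libraries.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# A statistical agency could publish dp_count(n, 0.5) instead of the exact n:
# aggregate answers remain useful on average, while any one person's
# presence or absence in the underlying records is statistically masked.
noisy = dp_count(100, 0.5)
```

Averaged over many releases the noise cancels out, which is why such mechanisms suit published statistics better than record-level data.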

4. Private Power, Online Information Flows and EU Law. Mind The Gap, by Angela Daly, Hart Publishing [50 pounds]

“This monograph examines how European Union law and regulation address concentrations of private economic power which impede free information flows on the Internet to the detriment of Internet users’ autonomy. In particular, competition law, sector specific regulation (if it exists), data protection and human rights law are considered and assessed to the extent they can tackle such concentrations of power for the benefit of users.

Using a series of illustrative case studies, of Internet provision, search, mobile devices and app stores, and the cloud, the work demonstrates the gaps that currently exist in EU law and regulation. It is argued that these gaps exist due, in part, to current overarching trends guiding the regulation of economic power, namely neoliberalism, by which only the situation of market failure can invite ex ante rules, buoyed by the lobbying of regulators and legislators by those in possession of such economic power to achieve outcomes which favour their businesses.

Given this systemic, and extra-legal, nature of the reasons as to why the gaps exist, solutions from outside the system are proposed at the end of each case study. This study will appeal to EU competition lawyers and media lawyers.”

Enjoy the read! (Unless the reform of the EU e-Privacy rules is taking much of your time these days – in this case, bookmark the reports of interest and save them for later).
***
Enjoy what you are reading? Consider supporting pdpEcho

Some end-of-the-year good news: People genuinely care about their privacy

Dear followers,

First, I would like to thank you for making this the most successful year in the five-year life of pdpEcho (I would especially like to thank those who supported the blog and thus helped me cover the cost of renting the blog’s .com name). I started this blog in my first year as a PhD student, to gather all the information I found interesting related to privacy and data protection. At that time I was trying to convince my classic “civilist” supervisor that data protection is also a matter of civil law, and that I could write a civil law thesis on this subject in Romanian, even though the Romanian literature on it counted only one book title, from 2004. In the five years that followed another book title was added, and the blog and I grew together (be it at different paces).

In recent months it offered me a way to stay connected to the field while transitioning from Brussels to the US. Most importantly, it constantly reminded me that privacy is really not dead, as has been claimed numerous times. I cared about it, the people who found this blog daily cared about it, and as long as we care about privacy, it will never die.

I am writing this end-of-the-year post with some very good news from Europe: you and I are not the only ones who care about privacy. A vast majority of Europeans do, too. The European Commission published a few days ago a Eurobarometer on ePrivacy, as a step towards the launch of the ePrivacy Directive reform later in January.

The results could not have been clearer:

“More than nine in ten respondents said it is important that personal information (such as their pictures, contact lists, etc.) on their computer, smartphone or tablet can only be accessed with their permission, and that it is important that the confidentiality of their e-mails and online instant messaging is guaranteed (both 92%)” (source, p. 2).

“More than seven in ten think both of these aspects are very important. More than eight in ten (82%) also say it is important that tools for monitoring their activities online (such as cookies) can only be used with their permission (82%), with 56% of the opinion this is very important” (source, p. 2).

Overwhelming support for encryption

Remarkably, 90% of those asked agreed “they should be able to encrypt their messages and calls, so they can only be read by the recipient”. Almost as many (89%) agree the default settings of their browser should stop their information from being shared (source, p. 3).
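The end-to-end principle these respondents endorse can be illustrated with a toy one-time pad (my sketch; real messaging systems use authenticated public-key protocols such as those in modern messengers, not this): the carrier relaying the ciphertext learns nothing, and only the holder of the key can read the message.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR: encryption and decryption are the same operation.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"see you at 8"
key = secrets.token_bytes(len(message))   # known only to sender and recipient

ciphertext = xor_cipher(message, key)     # what the network carrier sees
assert ciphertext != message              # unreadable in transit
assert xor_cipher(ciphertext, key) == message  # recipient recovers the text
```

The design point is the one the survey probes: confidentiality holds against everyone in the middle, not just against casual eavesdroppers.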

Respondents thought it unacceptable to have their online activities monitored in exchange for unrestricted access to a certain website (64%), or to pay in order not to be monitored when using a website (74%). Almost as many (71%) say it is unacceptable for companies to share information about them without their permission, even if it helps companies provide new services they may like (source, p. 4).

You can find the detailed report here.

Therefore, there is serious cause to believe that our work and energy are well spent in this field.

The new year brings me several publishing projects that I am very much looking forward to, as well as two work projects on this side of the Atlantic. Nevertheless, I hope I will be able to keep up the work on pdpEcho, for which I hope to receive more feedback and even input from you.

On this note, I wish you all a Happy New Year, in which all our fundamental rights will be valued and protected!

Gabriela

 

Data retention, only possible under strict necessity: targeted retention and pre-authorised access to retained data

The Court of Justice of the European Union (‘the Court’ or ‘CJEU’) gave a second judgment this week on the compatibility of data retention measures with the fundamental rights of persons as guaranteed by the Charter of Fundamental Rights of the EU (in Joined Cases C-203/15 and C-698/15 Tele2Sverige). The Court confirmed all its findings from the earlier Digital Rights Ireland judgment and took the opportunity to clarify and nuance some of its initial key findings (for an analysis of the DRI judgment, see my article published in 2015).

The two cases that were joined by the Court emerged in the fallout of the invalidation of the Data Retention Directive by the CJEU in the DRI judgment. Even if that Directive was declared invalid for breaching fundamental rights, most of the national laws that transposed it in the Member States were kept in force invoking Article 15(1) of the ePrivacy Directive. This Article provided for an exception to the rule of ensuring confidentiality of communications, which allowed Member States to “inter alia, adopt legislative measures providing for the retention of data for a limited period justified on the grounds laid down in this paragraph”. What the Member States seem to have disregarded with their decision to keep national data retention laws in force was that the same paragraph, last sentence, provided that “all the measures referred to in this paragraph (including data retention – my note) shall be in accordance with the general principles of Community law” (see §91 and §92 of the judgment). Respect for fundamental rights is one of those principles.

The Tele2Sverige case was initiated by a telecommunications service provider that followed the decision of the Court in DRI and stopped retaining data, because it considered that the national law requiring it to retain data was in breach of EU law. The Swedish authorities did not agree with this interpretation, and this is how the Court was given the opportunity to clarify the relationship between national data retention law and EU law after the invalidation of the Data Retention Directive. The Watson case originates in the UK, was initiated by individuals and refers to the Data Retention and Investigatory Powers Act 2014 (DRIPA).

In summary, the Court found that “national legislation which, for the purpose of fighting crime, provides for general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication” is in breach of Article 7 (right to private life), Article 8 (right to the protection of personal data) and Article 11 (freedom of expression) of the Charter of Fundamental Rights of the EU. The Court clarified that such legislation is precluded by Article 15(1) of the ePrivacy Directive. (See §1 of the operative part of the judgment)

Moreover, the Court found that national legislation in the field of the ePrivacy Directive that regulates the access of competent national authorities to retained data is incompatible with the three fundamental rights mentioned above, as long as:

  1. the objective pursued by that access, in the context of fighting crime, is not restricted solely to fighting serious crime;
  2. access is not subject to prior review by a court or an independent administrative authority;
  3. there is no requirement that the data concerned should be retained within the European Union (§2 of the operative part of the judgment).

There are a couple of remarkable findings of the Court in the Tele2Sverige/Watson judgment, analysed below. Brace yourselves for a long post. But it’s worth it. I’ll be looking at (1) how indiscriminate retention of metadata interferes with freedom of speech, (2) why data retention is merely an exception of the principle of confidentiality of communications and must not become the rule, (3) why the Court considers retaining on a generalised basis metadata is a far-reaching intrusion in the right to private life, (4) what is “targeted retention” and under what conditions the Court sees it acceptable and, finally (5) what is the impact of all of this on the Privacy Shield and PNR schemes.

 

(1) Indiscriminate retention of metadata interferes with freedom of speech

Even though none of the preliminary ruling questions asked the Court to look at the compliance of national data retention measures also in the light of Article 11 of the Charter (freedom of expression), the Court did so of its own motion.

This was needed so that the Court could finish what it began in DRI. In that previous case, the Court referred to Article 11 of the Charter in §28, replying to a specific preliminary ruling question, by mentioning that:

“it is not inconceivable that the retention of the data in question might have an effect on the use, by subscribers or registered users, of the means of communication covered by that directive and, consequently, on their exercise of the freedom of expression guaranteed by Article 11 of the Charter”.

However, it never analysed if that was the case. In §70, the Court just stated that, after finding the Directive to be invalid because it was not compliant with Articles 7 and 8 of the Charter, “there is no need to examine the validity of Directive 2006/24 in the light of Article 11 of the Charter”.

This time, the Court developed its argument. It started by underlining that data retention legislation such as that at issue in the main proceedings “raises questions relating to compatibility not only with Articles 7 and 8 of the Charter, which are expressly referred to in the questions referred for a preliminary ruling, but also with the freedom of expression guaranteed in Article 11 of the Charter” (§92).

The Court continued by emphasising that the importance of freedom of expression must be taken into consideration when interpreting Article 15(1) of the ePrivacy Directive “in the light of the particular importance accorded to that freedom in any democratic society” (§93). “That fundamental right (freedom of expression), guaranteed in Article 11 of the Charter, constitutes one of the essential foundations of a pluralist, democratic society, and is one of the values on which, under Article 2 TEU, the Union is founded” (§93), it continues.

The Court justifies the link between data retention and freedom of expression by stating, slightly more confidently than in DRI, that:

“the retention of traffic and location data could nonetheless have an effect on the use of means of electronic communication and, consequently, on the exercise by the users thereof of their freedom of expression, guaranteed in Article 11 of the Charter” (§101)

The operative part of the judgment clearly states that Articles 7, 8 and 11 of the Charter preclude data retention legislation such as that in the main proceedings.

(2) The exception to the “principle of confidentiality” must not become the rule

The Court refers several times to a “principle of confidentiality of communications” (§85, §90, §95, §115). It explains in §85 that this principle is established by the ePrivacy Directive and “implies, inter alia, (…) that, as a general rule, any person other than the users is prohibited from storing, without the consent of the users concerned, the traffic data related to electronic communications. The only exceptions relate to persons lawfully authorised in accordance with Article 15(1) of that directive and to the technical storage necessary for conveyance of a communication.”

With regard to the first exception, the Court recalls that, because Article 15(1) is construed so as “to restrict the scope of the obligation of principle to ensure confidentiality of communications and related traffic data”, it “must, in accordance with the Court’s settled case-law, be interpreted strictly” (§89). The Court adds, using strong language:

“That provision cannot, therefore, permit the exception to that obligation of principle and, in particular, to the prohibition on storage of data, laid down in Article 5 of Directive 2002/58, to become the rule, if the latter provision is not to be rendered largely meaningless” (§89).

In any case, the Court adds, all exceptions adopted pursuant to Article 15(1) of the ePrivacy Directive must be in accordance with the general principles of EU law, which include the fundamental rights guaranteed by the Charter (§91) and must strictly have one of the objectives enumerated in Article 15(1) of the ePrivacy Directive (§90).

As for the second derogation to the principle, the Court looks at recitals 22 and 26 of the ePrivacy Directive and affirms that the retention of traffic data is permitted “only to the extent necessary and for the time necessary for the billing and marketing of services and the provision of value added services. (…) As regards, in particular, the billing of services, that processing is permitted only up to the end of the period during which the bill may be lawfully challenged or legal proceedings brought to obtain payment. Once that period has elapsed, the data processed and stored must be erased or made anonymous” (§85).

(3) A “very far-reaching” and “particularly serious” interference

The Court observed that the national data retention law at issue in the main proceedings “provides for a general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication, and that it imposes on providers of electronic communications services an obligation to retain that data systematically and continuously, with no exceptions” (§97).

The data retained is metadata and is described in detail in §98. The Court confirmed its assessment in DRI that metadata “taken as a whole, is liable to allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained, such as everyday habits, permanent or temporary places of residence, daily or other movements, the activities carried out, the social relationships of those persons and the social environments frequented by them” (§99). It also added that this data “provides the means (…) of establishing a profile of the individuals concerned, information that is no less sensitive, having regard to the right to privacy, than the actual content of communications” (§99).
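The Court's point that metadata "taken as a whole" profiles a person can be made concrete with a toy aggregation over hypothetical call records (the data below is entirely invented for illustration; real analyses operate on billions of such rows):

```python
from collections import Counter

# Hypothetical retained metadata: (caller, callee, hour_of_day).
# Note that no communication *content* appears anywhere.
records = [
    ("alice", "clinic",  9), ("alice", "clinic",  9), ("alice", "clinic", 9),
    ("alice", "lawyer", 11),
    ("alice", "bob",    22), ("alice", "bob",    23),
]

# Contact frequency alone reveals relationships and routines.
contacts = Counter(callee for _, callee, _ in records)
assert contacts.most_common(1)[0][0] == "clinic"    # repeated medical contact

# Time-of-day patterns add a further layer of inference.
late_night = {callee for _, callee, hour in records if hour >= 22}
assert late_night == {"bob"}                        # a close personal relation
```

Two trivial group-bys already suggest health, legal and intimate details, which is exactly why the Court treats metadata as no less sensitive than content.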

The Court went further to emphasise that this kind of indiscriminate gathering of data represents a “very far-reaching” and “particularly serious” interference with the fundamental rights to private life and protection of personal data (§100). Moreover, “[t]he fact that the data is retained without the subscriber or registered user being informed is likely to cause the persons concerned to feel that their private lives are the subject of constant surveillance” (§100).

The Court indicates that such a far-reaching interference can only be justified by the objective of fighting serious crime (§102). And even in this case, the objective of fighting serious crime does not justify in itself “general and indiscriminate retention of all traffic and location data” (§103). The measures must, in addition, be strictly necessary to achieve this objective (§106).

The Court found that national legislation such as that at issue in the main proceedings does not comply with this requirement, because (§105):

  • it “covers, in a generalised manner, all subscribers and registered users and all means of electronic communication as well as all traffic data, provides for no differentiation, limitation or exception according to the objective pursued”.
  • “It is comprehensive in that it affects all persons using electronic communication services, even though those persons are not, even indirectly, in a situation that is liable to give rise to criminal proceedings”.
  • It “applies even to persons for whom there is no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with serious criminal offences”.
  • “it does not provide for any exception, and consequently it applies even to persons whose communications are subject, according to rules of national law, to the obligation of professional secrecy”.

(4) Targeted data retention is permissible. Here is a list with all conditions:

The Court spells out that fundamental rights do not prevent a Member State from adopting “legislation permitting, as a preventive measure, the targeted retention of traffic and location data, for the purpose of fighting serious crime, provided that the retention of data is limited, with respect to:

  • the categories of data to be retained,
  • the means of communication affected,
  • the persons concerned and
  • the retention period adopted, to what is strictly necessary” (§108).

In addition, such legislation must:

  • “lay down clear and precise rules governing the scope and application of such a data retention measure and imposing minimum safeguards, so that the persons whose data has been retained have sufficient guarantees of the effective protection of their personal data against the risk of misuse”.
  • “indicate in what circumstances and under which conditions a data retention measure may, as a preventive measure, be adopted, thereby ensuring that such a measure is limited to what is strictly necessary” (§109).

Other conditions that need to be fulfilled for a data retention legislation to be considered compatible with fundamental rights are indicated directly or indirectly by the Court in further paragraphs.

Such legislation must:

  • be restricted to “retention in relation to data pertaining to a particular time period and/or geographical area and/or a group of persons likely to be involved, in one way or another, in a serious crime, or persons who could, for other reasons, contribute, through their data being retained, to fighting crime” (§106).
  • “meet objective criteria, that establish a connection between the data to be retained and the objective pursued. In particular, such conditions must be shown to be such as actually to circumscribe, in practice, the extent of that measure and, thus, the public affected” (§110).
  • “be based on objective evidence which makes it possible to identify a public whose data is likely to reveal a link, at least an indirect one, with serious criminal offences, and to contribute in one way or another to fighting serious crime or to preventing a serious risk to public security” (§111).
  • “lay down clear and precise rules indicating in what circumstances and under which conditions the providers of electronic communications services must grant the competent national authorities access to the data. (…) a measure of that kind must be legally binding under domestic law” (§117).
  • “lay down the substantive and procedural conditions governing the access of the competent national authorities to the retained data” (§118).
  • provide that data must be “retained within the European Union” (§122).
  • provide for “the irreversible destruction of the data at the end of the data retention period” (§122).
  • “ensure review, by an independent authority, of compliance with the level of protection guaranteed by EU law with respect to the protection of individuals in relation to the processing of personal data, that control being expressly required by Article 8(3) of the Charter” (§123).

Other specific conditions emerge with regard to access of competent authorities to the retained data. Access:

  • “can be granted, in relation to the objective of fighting crime, only to the data of individuals suspected of planning, committing or having committed a serious crime or of being implicated in one way or another in such a crime” (§119). [The Court refers here to the ECtHR cases of Zakharov and Szabó, after a long series of privacy-related cases where it did not refer at all to the ECtHR case-law].
  • must be subject to “a prior review carried out either by a court or by an independent administrative body” (…) “the decision of that court or body should be made following a reasoned request by those authorities submitted, inter alia, within the framework of procedures for the prevention, detection or prosecution of crime” (§120). The only exception to the prior review is in “cases of validly established urgency” (§120).
  • must be notified by authorities to the persons affected “under the applicable national procedures, as soon as that notification is no longer liable to jeopardise the investigations being undertaken by those authorities. That notification is, in fact, necessary to enable the persons affected to exercise, inter alia, their right to a legal remedy” (§121).
  • must be restricted solely to fighting serious crime (§125).

(5) Possible effects on the Privacy Shield and on PNR schemes

This judgment could have indirect effects on the “Privacy Shield” and slightly more immediate effects on Passenger Name Record (PNR) schemes.

The indirect effect on the Privacy Shield and on all other adequacy schemes could only manifest itself in the context of a challenge to such transfer instruments before the CJEU. The seriousness with which the Court of Justice detailed all the conditions that must be met by a legislative measure providing for a particular processing of personal data to be compliant with the fundamental rights to private life and to the protection of personal data strengthens the condition of “essential equivalence”.

In other words, it will be difficult to convince the Court that a third country that allows large-scale collection of metadata (and, all the more so, of the content of communications), and access to that data without the supervision of an independent authority, provides an adequate level of protection that would lawfully allow transfers of data from the EU to that third country. (For comparison, the CJEU referred to the Digital Rights Ireland case eight times, and in key findings, in its judgment in Schrems.)

As for PNR schemes, the effects may come sooner and more directly, as we are awaiting the Court’s Opinion 1/15 on the compliance of the EU-Canada PNR agreement with fundamental rights. The Court can be expected to refer copiously back to its new list of conditions for access by authorities to retained personal data when looking at how all PNR data is directly transferred by companies to law enforcement authorities in a third country, with no limitations.

***

Find what you’re reading useful? Please consider supporting pdpecho.

EU Commission’s leaked plan for the data economy: new rules for IoT liability and sharing “non-personal data”

It seems that it’s the season of EU leaks on internet and digital policy. One day after the draft new e-Privacy regulation was leaked (to Politico), another document appeared online (published by Euractiv) before its adoption and release – a Communication from the European Commission on “Building a European data economy”.

It announces at least two revisions of existing legal acts: the Database Copyright Directive (96/9) and the Product Liability Directive (85/374). New legislative measures may also be needed to achieve the objectives announced in the draft Communication. However, the Commission is not clear about this and leaves a lot of the decision-making for after the results of wide stakeholder and public consultations are processed.

The common thread of most of the policy areas covered by the Communication is “non-personal data”. The Commission starts from the premise that while the GDPR allows for the free movement of personal data within the EU, there are currently no common rules among Member States for sharing, accessing or transferring “non-personal data”. Moreover, the Commission notes that the number of national data localisation measures is growing.

“The issue of the free movement of data concerns all types of data: enterprises and actors in the data economy deal with a mixture of personal and non-personal data, machine generated or created by individuals, and data flows and data sets regularly combine these different types of data”, according to the draft Communication.

And what is truly challenging is that “enterprises and actors in the data economy will be dealing with a mixture of personal and non-personal data; data flows and datasets will regularly combine both. Any policy measure must take account of this economic reality”.

If you are wondering what is meant by “non-personal data”, the draft Communication provides some guidance to understand what it refers to. For instance, the draft Communication mentions that “personal data can be turned into non-personal data through the process of anonymisation” and that “the bulk of machine-generated data are not personal data”. Therefore, anonymisation and de-identification techniques will gain even more importance.
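To make the distinction concrete, here is a minimal, hypothetical Python sketch contrasting pseudonymisation (salted hashing of a direct identifier, which generally leaves the data personal under the GDPR, since whoever holds the salt can re-link the records) with a crude first step towards anonymisation (removing the identifier altogether). All field names are invented for the example, and real anonymisation would also have to address indirect identifiers:

```python
import hashlib
import secrets

def pseudonymise(record: dict, fields: list, salt: bytes) -> dict:
    """Replace direct identifiers with salted hashes (pseudonymisation).

    Pseudonymised data generally remains personal data under the GDPR,
    because whoever holds the salt can re-link the records.
    """
    out = dict(record)
    for field in fields:
        digest = hashlib.sha256(salt + str(record[field]).encode()).hexdigest()
        out[field] = digest[:16]  # truncated hash as a stand-in identifier
    return out

def drop_identifiers(record: dict, fields: list) -> dict:
    """A crude first step towards anonymisation: remove direct identifiers.

    Real anonymisation must also consider indirect identifiers, e.g. rare
    combinations of attributes that single a person out.
    """
    return {k: v for k, v in record.items() if k not in fields}

# Hypothetical IoT reading; "meter_id" stands in for a direct identifier.
reading = {"meter_id": "DE-4711", "kwh": 3.2, "hour": 14}
salt = secrets.token_bytes(16)
print(pseudonymise(reading, ["meter_id"], salt))
print(drop_identifiers(reading, ["meter_id"]))
```

The sketch also illustrates why the legal line is blurry: the hashed record still looks like “machine-generated data”, yet it remains re-identifiable, and thus personal.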

While the GDPR covers how personal data are used in the EU, the proposals that will be made on the basis of this Communication envisage the use of all the other data.

So what does the Commission propose?

Several objectives are announced, most of them dealing with the free flow of and access to “non-personal data”, while another objective looks at reforming liability rules to accommodate algorithms, Artificial Intelligence and the Internet of Things.

Free flow of and access to non-personal data

  • According to the draft Communication, any Member State action affecting data storage or processing should be guided by a ‘principle of free movement of data within the EU’.
  • Broader use of open, well-documented Application Programming Interfaces (APIs) could be considered, through technical guidance, including identification and spreading of best practice for companies and public sector bodies.
  • The Commission could issue guidance based on the Trade Secrets Directive, copyright legislation and the Database Directive on how data control rights should be addressed in contracts. The Commission intends to launch the review of the Database Directive in 2017.
  • Access for public interest purposes – public authorities could be granted access to data where this would be in the general interest and would considerably improve the functioning of the public sector, for example access for statistical offices to business data or the optimization of traffic management systems on the basis of real-time data from private vehicles.
  • Selling and acquiring databases could be regulated. “Access against remuneration”: a framework based on fair, non-discriminatory terms could be developed for data holders, such as manufacturers, service providers or other parties, to provide access to the data they hold against remuneration. The Communication is not clear on whether this proposal could also cover personal data. In any case, on several occasions throughout the draft Communication, it is mentioned or implied that the GDPR takes precedence over any new rules that would impact the protection of personal data.
  • A data producer’s right to use and licence the use of data could be introduced; by “data producer”, the Commission means “the owner or long-term user of the device”. This approach would “open the possibility for users to exploit their data and thereby contribute to unlocking machine-generated data”.
  • Developing further rights to data portability (building on the GDPR data portability right and on the proposed rules on contract for the supply of digital content, further rights to portability of non-personal data could be introduced). The initiatives for data portability would be accompanied by sector specific experiments on standards (which would involve a multi-stakeholder collaboration including standard setters, industry, the technical community, and public authorities).
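As a rough illustration of what “portable” data could look like in practice, the sketch below exports a set of records in two structured, commonly used and machine-readable formats (JSON and CSV), echoing the wording of the GDPR’s Article 20 portability right on which the Commission proposes to build. The record fields are invented for the example:

```python
import csv
import io
import json

def export_portable(records: list) -> dict:
    """Export records in structured, commonly used and machine-readable
    formats (JSON and CSV) - the kind of output a portability right implies."""
    fieldnames = sorted({key for record in records for key in record})
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return {"json": json.dumps(records, indent=2), "csv": buffer.getvalue()}

# Hypothetical usage data collected by a connected-device service.
usage = [
    {"device": "thermostat", "day": "2017-01-10", "hours_active": 6},
    {"device": "thermostat", "day": "2017-01-11", "hours_active": 4},
]
exported = export_portable(usage)
print(exported["csv"])
```

The sector-specific standards work mentioned above would essentially be about agreeing on common schemas (field names, units, formats) so that such exports are actually reusable by a competing service.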

Rethinking liability rules for the IoT and AI era

Even though Artificial Intelligence is not mentioned as such in the draft Communication, it is clear that the scenario of algorithms making decisions is also envisaged by the announced objective to reform product liability rules, alongside IoT. As the draft Communication recalls, the Product Liability Directive currently establishes the principle of strict liability, i.e. liability without fault: where a defective product causes damage to a consumer, the manufacturer may be liable even without negligence or fault on its part. The current rules apply only to the producer, always require a defect, and require that the causal link between the defect and the damage be proven.

The Commission proposed two approaches, which will be subject to consultation:

  • “Risk-generating or risk-management approaches: liability would be assigned to the market players generating a major risk for others and benefitting from the relevant device, product or service or to those which are best placed to minimize or avoid the realization of the risk.”
  • “Voluntary or mandatory insurance schemes: they would compensate the parties who suffered the damage; this approach would need to provide legal protection to investments made by business while reassuring victims regarding fair compensation or appropriate insurance in case of damage.”

“Connected and automated driving” – used as test case

The Commission intends to test all the proposed legal solutions, after engaging in wide consultations, in a real life scenario and proposes “connected and automated driving” as the test case.

Finally, read all of these objectives and proposals having in mind that they come from a draft document that was leaked to Euractiv. It is possible that by the time of adoption and publication of this Communication (and there is no indication as to when it will be officially published) its content will be altered.

***

Find what you’re reading useful? Please consider supporting pdpecho.

What’s new in research: Georgetown Law Technology Review, human rights and encryption, and data protection proof free-trade agreements (open access)

I’m starting this week’s “What’s new in research” post with some good news:

  • There is a new technology law journal in town – the Georgetown Law Technology Review, which was just launched. It provides full access to its articles, notes and comments. “Few issues are of greater need for careful attention today than the intersection of law and technology”, writes EPIC’s Marc Rotenberg, welcoming the new Review.
  • Tilburg Institute for Law, Technology and Society (TILT) launched its open call for fellowship applications for the 2017-2018 academic year. “This programme is for internationally renowned senior scholars who wish to spend the 2017-2018 academic year, or a semester, in residence at TILT as part of its multi-disciplinary research team to work on some of the most interesting, challenging and urgent issues relating to emerging and disruptive technologies.” I spent three months at TILT in 2012, as a visiting researcher, during my PhD studies. I highly recommend this experience – it’s one of the best environments there are to develop your research in the field of data protection/privacy.

 


As for the weekend reads proposed this week, they tackle hot topics: human rights and encryption from a global perspective, international trade agreements and data protection from the EU law perspective, and newsworthiness and the protection of privacy in the US.

 

1. “Human rights and encryption”, by Wolfgang Schulz and Joris van Hoboken, published by UNESCO.

“This study focuses on the availability and use of a technology of particular significance in the field of information and communication: encryption, or more broadly cryptography. Over the last decades, encryption has proven uniquely suitable to be used in the digital environments. It has been widely deployed by a variety of actors to ensure protection of information and communication for commercial, personal and public interests. From a human rights perspective, there is a growing recognition that the availability and deployment of encryption by relevant actors is a necessary ingredient for realizing a free and open internet. Specifically, encryption can support free expression, anonymity, access to information, private communication and privacy. Therefore, limitations on encryption need to be carefully scrutinized. This study addresses the relevance of encryption to human rights in the media and communications field, and the legality of interferences, and it offers recommendations for state practice and other stakeholders.”

2. “Trade and Privacy: Complicated Bedfellows? How to Achieve Data Protection-Proof Free Trade Agreements“, by Kristina Irion, Svetlana Yakovleva, Marija Bartl, a study commissioned by the European Consumer Organisation/Bureau Européen des Unions de Consommateurs (BEUC), Center for Digital Democracy (CDD), The Transatlantic Consumer Dialogue (TACD) and European Digital Rights (EDRi).

“This independent study assesses how EU standards on privacy and data protection are safeguarded from liberalisation by existing free trade agreements (the General Agreement on Trade in Services (GATS) and the Comprehensive Economic and Trade Agreement (CETA)) and those that are currently under negotiation (the Transatlantic Trade and Investment Partnership (TTIP) and the Trade in Services Agreement (TiSA)). Based on the premise that the EU does not negotiate its privacy and data protection standards, the study clarifies safeguards and risks in, respectively, the EU legal order and international trade law. In the context of the highly-charged discourse surrounding the new generation of free trade agreements under negotiation, this study applies legal methods in order to derive nuanced conclusions about the preservation of the EU’s right to regulate privacy and the protection of personal data.”

3. “Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms”, by Erin C. Carroll, published by the Georgetown University Law Center.

“In deciding privacy lawsuits against media defendants, courts have for decades deferred to the media. They have given it wide berth to determine what is newsworthy and so, what is protected under the First Amendment. And in doing so, they have often spoken reverently of the editorial process and journalistic decision-making.

Yet, in just the last several years, news production and consumption has changed dramatically. As we get more of our news from digital and social media sites, the role of information gatekeeper is shifting from journalists to computer engineers, programmers, and app designers. The algorithms that the latter write and that underlie Facebook, Twitter, Instagram, and other platforms are not only influencing what we read but are prompting journalists to approach their craft differently.

While the Restatement (Second) of Torts says that a glance at any morning newspaper can confirm what qualifies as newsworthy, this article argues that the modern-day corollary (which might involve a glance at a Facebook News Feed) is not true. If we want to meaningfully balance privacy and First Amendment rights, then courts should not be so quick to defer to the press in privacy tort cases, especially given that courts’ assumptions about how the press makes newsworthiness decisions may no longer be accurate. This article offers several suggestions for making better-reasoned decisions in privacy cases against the press.”

Enjoy the reads and have a nice weekend!

***

Find what you’re reading useful? Please consider supporting pdpecho.

Greek judges asked the CJEU if they should dismiss evidence gathered under the national law that transposed the invalidated Data Retention Directive

Here is a new case at the Court of Justice of the EU that the data protection world will be looking forward to, as it addresses questions about the practical effects of the invalidation of the Data Retention Directive.

(Image licensed under Creative Commons)

Case C-475/16 K. (yes, like those Kafka characters) concerns criminal proceedings against K. before Greek courts, which apparently involve evidence gathered under the Greek national law that transposed the now-invalidated Data Retention Directive. The Directive was invalidated in its entirety by the CJEU in 2014, after the Court found in its Digital Rights Ireland judgment that the provisions of the Directive breached Articles 7 (right to respect for private life) and 8 (right to the protection of personal data) of the Charter of Fundamental Rights.

The Greek judges sent in August a big set of questions for a preliminary ruling to the CJEU (17 questions in total). Among them are a couple of very interesting ones, because they deal with the practical effects of the invalidation of an EU Directive and with what happens to the national laws of the Member States that transposed it.

For instance, the national judge asks whether national courts are obliged not to apply legislative measures transposing the annulled Directive and whether this obligation also means that they must dismiss evidence obtained as a consequence of those legislative measures (Question 3). The national judge also wants to know if maintaining the national law that transposes an invalidated Directive constitutes an obstacle to the establishment and functioning of the internal market (Question 16).

Another question raised by the national judge is whether the national legislation that transposed the annulled Data Retention Directive and that remained in force at national level after the annulment is still considered as falling under the scope of EU law (Question 4). The answer to this question is important because the EU Charter and the supremacy of EU law do not apply to situations that fall outside the scope of EU law.

The Greek judge didn’t miss the opportunity to also ask about the effect on the national law transposing the Data Retention Directive of the fact that this Directive was also enacted to implement a harmonised framework at the European level under Article 15(1) of the ePrivacy Directive (Question 5). The question is whether this fact is enough to bring the surviving national data retention laws under the scope of EU law.

Provided that the Charter is considered applicable to the facts of the case, the national judge further wants to know whether a national law that complies only partly with the criteria set out in the Digital Rights Ireland decision still breaches Articles 7 and 8 of the Charter because it does not comply with all of them (Question 13). For instance, the national judge estimates that the national law does not comply with the requirement that the persons whose data are retained must be at least indirectly in a situation which is liable to give rise to criminal prosecutions (para 58 DRI), but that it complies with the requirement that the national law must contain substantive and procedural conditions for the access of competent authorities to the retained data, as well as objective criteria by which the number of persons authorised to access these data is limited to what is strictly necessary (paras 61-62 DRI).

Lastly, it will be also interesting to see whether the Court decides to address the issue of what “serious crime” means in the context of limiting the exercise of fundamental rights (Questions 10 and 11).

If you would like to delve into some of these topics, have a look at the AG Opinion in the Tele2 Sverige case, published on 19 July 2016. The judgment in that case is due on 21 December 2016. Also, have a look at this analysis of the Opinion.

As for a quick “what to expect” in the K. case from my side, here it is:

  • the CJEU will seriously reorganise the 17 questions and regroup them into four or five topics, also clarifying that it only deals with the interpretation of EU law, not national law or facts in national proceedings;
  • the national laws transposing the Data Retention Directive will probably be considered as being in the field of EU law – as they also regulate within the ambit of the ePrivacy Directive;
  • the Court will restate the criteria in DRI and probably clarify that all criteria must be complied with, no exceptions, in order for national measures to comply with the Charter;
  • the CJEU will probably not give indications to the national courts on whether they should admit or dismiss evidence collected on the basis of national law that does not comply with EU law – that is too specific, and the Court is ‘in the business’ of interpreting EU law; the best-case scenario, which is possible, is that the Court will give some guidance on the obligations of Member States (and hopefully their authorities) regarding the effects of their transposing national laws when the relevant EU secondary law is annulled;
  • as for what “serious crime” means in the context of limiting fundamental rights, let’s see about that. Probably the Court will give useful guidance.

***

Find what you’re reading useful? Please consider supporting pdpecho.

Even if post-Brexit UK adopts the GDPR, it will be left without its “heart”

Gabriela Zanfir Fortuna


There has lately been a wave of optimism among those looking for legal certainty that the GDPR will be adopted by the UK even after the country leaves the European Union. This wave was prompted by a declaration of the British Secretary of State, Karen Bradley, at the end of October, when she stated before a Committee of the Parliament that “We will be members of the EU in 2018 and therefore it would be expected and quite normal for us to opt into the GDPR and then look later at how best we might be able to help British business with data protection while maintaining high levels of protection for members of the public”. The Information Commissioner of the UK, Elizabeth Denham, welcomed the news. On the other hand, as Amberhawk explained in detail, this will not mean that the UK will automatically be considered as ensuring an adequate level of protection.

The truth is that as long as the UK is still a Member State of the EU, it can’t opt in or, for that matter, opt out of regulations (other than those subject to the exemptions negotiated by the UK when it entered the Union – but this is not the case for the GDPR). Regulations are “binding in their entirety” and “directly applicable”, according to Article 288 of the Treaty on the Functioning of the EU. So, yes, quite normally, if the UK is still a Member State of the EU on 25 May 2018, then the GDPR will start applying in the UK just as it will in Estonia or France.

The fate of the GDPR after Brexit becomes effective will be as uncertain as the fate of all other EU legislative acts transposed in the UK or directly applicable in the UK. But let’s imagine the GDPR will remain national law after Brexit, in one form or another. If this happens, it is likely that it will take on a life of its own, departing from harmonised application throughout the EU. First and foremost, the GDPR in the UK will not be applied in the light of the Charter of Fundamental Rights of the EU and especially its Article 8 – the right to the protection of personal data. The Charter played an extraordinary role in the strengthening of data protection in the EU after it became binding, in 2009, being invoked by the Court of Justice of the EU in its landmark judgments – Google v Spain, Digital Rights Ireland and Schrems.

The Court held as far back as 2003 that “the provisions of Directive 95/46, in so far as they govern the processing of personal data liable to infringe fundamental freedoms, in particular the right to privacy, must necessarily be interpreted in the light of fundamental rights” (Österreichischer Rundfunk, para 68). This principle was repeated in most of the following cases interpreting Directive 95/46 and other relevant secondary law for this field, perhaps with the most notable results in Digital Rights Ireland and Schrems. 

See, for instance:

“As far as concerns the rules relating to the security and protection of data retained by providers of publicly available electronic communications services or of public communications networks, it must be held that Directive 2006/24 does not provide for sufficient safeguards, as required by Article 8 of the Charter, to ensure effective protection of the data retained against the risk of abuse and against any unlawful access and use of that data” (Digital Rights Ireland, para. 66).

“As regards the level of protection of fundamental rights and freedoms that is guaranteed within the European Union, EU legislation involving interference with the fundamental rights guaranteed by Articles 7 and 8 of the Charter must, according to the Court’s settled case-law, lay down clear and precise rules governing the scope and application of a measure and imposing minimum safeguards, so that the persons whose personal data is concerned have sufficient guarantees enabling their data to be effectively protected against the risk of abuse and against any unlawful access and use of that data. The need for such safeguards is all the greater where personal data is subjected to automatic processing and where there is a significant risk of unlawful access to that data” (Schrems, para. 91).

Applying data protection law outside the spectrum of fundamental rights will most likely not ensure sufficient protection for the person. While the UK will still remain under the legal effect of the European Convention on Human Rights and its Article 8 – respect for private life – this by no means equates to the specific protection ensured to personal data by Article 8 of the Charter as interpreted and applied by the CJEU.

Not only will the Charter not be binding on the UK post-Brexit, but the Court of Justice of the EU will no longer have jurisdiction over the UK territory (unless some sort of spectacular agreement is negotiated for Brexit). Moreover, EU law will not enjoy supremacy over national law, as is the case right now. This means that British data protection law will be able to depart from the European standard (the GDPR) to the extent desired by the legislature. For instance, there will be nothing standing in the way of the British legislature adopting permissive exemptions from the rights of the data subject, pursuant to Article 23 GDPR.

So when I mentioned in the title that the GDPR in post-Brexit UK will in any case be left without its “heart”, I was referring to its application and interpretation in the light of the Charter of Fundamental Rights of the EU.

***

Find what you’re reading useful? Please consider supporting pdpecho.

Interested in the GDPR? See the latest posts:

CNIL just published the results of their GDPR public consultation: what’s in store for DPOs and data portability? (Part I)

CNIL’s public consultation on the GDPR: what’s in store for Data Protection Impact Assessments and certification mechanisms? (Part II)

The GDPR already started to appear in CJEU’s soft case-law (AG Opinion in Manni)

CNIL’s public consultation on the GDPR: what’s in store for Data Protection Impact Assessments and certification mechanisms? (Part II)

Gabriela Zanfir Fortuna

The French Data Protection Authority, CNIL, made public last week the report of the public consultation it held between 16 and 19 July 2016 among professionals about the General Data Protection Regulation (GDPR). The public consultation gathered 540 replies from 225 contributors.

The CNIL focused on four main issues in the consultation:

  • the data protection officer;
  • the right to data portability;
  • the data protection impact assessments;
  • the certification mechanism.

These are also the four themes in the action plan of the Article 29 Working Party for 2016.

This post summarises the results and action plan for the last two themes. If you want to read about the results on the data protection officer and the right to data portability, check out Part I of this post. [Disclaimer: all quotations are translated from French].

1) On data protection impact assessments (DPIAs)

Article 35 GDPR obliges data controllers to carry out an assessment of the impact of the envisaged processing operations on the protection of personal data prior to the processing, where the processing is likely to result in a high risk to the rights and freedoms of natural persons, taking into account the nature, scope, context and purposes of the processing, and in particular where the processing uses new technologies. According to Article 35(4), the supervisory authorities must make public a list of the kinds of processing operations which are subject to this requirement.

Article 35(3) provides that there are three cases where DPIAs must be conducted:

a) a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing (including profiling);

b) processing on a large scale of sensitive data (e.g. health data, data revealing race, political opinions, etc.);

c) a systematic monitoring of a publicly accessible area on a large scale.

According to the report, the DPIA emerges as a dynamic compliance tool, which contributes to maintaining data security, reducing the risks of processing, determining the suitable safeguards, preventing legal deficiencies and better implementing Privacy by Design and Privacy by Default (p. 17). It was deemed by participants a “new and useful tool”.

Participants in the public consultation raised three main categories of questions:

  • When do controllers have to conduct a DPIA?
  • How to conduct a DPIA?
  • Who does what when carrying out a DPIA?

The respondents asked the supervisory authority to take an active role in helping them prepare for DPIAs – “to clarify everything that is unclear, to involve companies [in discussions], to provide criteria and examples” – and to help harmonise the criteria at European level (p. 14).

Respondents brought up several particular cases, such as the processing of HR data, processing of data by websites, and processing by public administrations or hospitals. These scenarios raised questions such as: does the term “large scale” refer only to Big Data? Does it refer to the volume of data processed or to the number of people whose data are processed? Are “new technologies” all technologies used for the first time by a controller? Is behavioural advertising “profiling” within the meaning of the GDPR? (p. 14).

The participants also wanted to know whether a DPIA should be conducted for processing operations that are already in place and that fall within one of the cases where a DPIA is compulsory.

As for the methodological approach, the respondents asked for a simple method. They also pointed to existing tools that could be used, such as ISO 29134 and EBIOS. In any case, they suggested that the method be tested with controllers and harmonised at European level. They also asked whether professional associations could create their own sector-specific DPIA methodologies (p. 15).

The CNIL concluded that the contributions to the public consultation showed a great need for clarification, but also revealed “interesting ideas” for implementing the DPIA requirements, which will be taken into account. The most difficult points are the criteria for deciding whether a DPIA must be conducted, the harmonisation of methodologies at European level, and the prior consultation of supervisory authorities (p. 17).

The immediate action plan centres on guidance from the Article 29 Working Party on DPIAs and on what constitutes a “high risk”, which will provide interpretations of the vaguer requirements. The CNIL also intends to take steps of its own, such as updating its current guidance on Privacy Impact Assessments.

4) On the certification mechanism

Article 42 of the GDPR provides that Member States, DPAs, the European Data Protection Board and the European Commission shall “encourage” the establishment of data protection certification mechanisms and of data protection seals and marks for the purpose of demonstrating that processing operations by controllers and processors comply with the Regulation. Article 42(3) clarifies that certification is voluntary and must be available via a transparent process.

Surprisingly, the “certification” part of the public consultation was the one that produced more plain suggestions than questions compared to the other three themes, as is apparent from the report. On the other hand, the contributions were smaller in volume, presumably because this is a novel topic for the data protection world.

One of the questions dealt with in the consultation was who should issue certifications and labels. The respondents preferred a certification issued at European level and, only in the absence of such a possibility, a certification issued at national level that would be mutually recognised. They also underlined that the coexistence of certifications issued by DPAs with certifications issued by certification bodies would be difficult. Participants suggested that standards be drafted by regulators in consultation with companies and the future evaluators, with a view to harmonising the practices of the different certification bodies (p. 11).

To the question of what should be certified or labeled with priority, the respondents provided a list of suggestions (p. 11):

  • online products and services processing health data;
  • database monitoring/surveillance solutions;
  • the services provided by the state;
  • anonymisation techniques;
  • search engines;
  • social media platforms.

As for the specific needs of small and medium-sized enterprises, the replies pointed to support in filing certification requests, reduced costs and a simple methodology (p. 12).

Another topic discussed was how to withdraw a label or certification in case of misconduct – proposals ranged from creating an “alarm system” to flag non-compliance with the certification, to an effective withdrawal following an adversarial procedure with formal notice to the certification body, which could propose a corrective plan during the procedure (p. 12).

Finally, respondents also made the point that certification under Article 42 GDPR should focus essentially on data protection, not just data security (p. 13).

The report does not contain an action plan for certification.

***

Find what you’re reading useful? Please consider supporting pdpecho.