Here’s how the Web’s inventor wants to reinvent it, and why this is great news for privacy

Last May I had the chance to meet Prof. Tim Berners-Lee and one of the lead researchers on his team at MIT, Andrei Sambra, when I accompanied Giovanni Buttarelli, the European Data Protection Supervisor, on his visit to MIT.

Andrei then presented the Solid project, and we had the opportunity to discuss it with Prof. Berners-Lee, who leads the work on Solid. The project “aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.” In other words, the researchers want to de-centralise the Internet.

“Solid (derived from “social linked data”) is a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles. Solid is modular and extensible and it relies as much as possible on existing W3C standards and protocols”, as explained on the project’s website.

Andrei explains in a blog post that, as a first step, the project looks for solutions “to decouple the applications from the data they produce, and then to decouple the data from the actual storage server.”

“This means that applications and servers are interchangeable, and they can be swapped without impacting the most important part – your data. It’s all about freedom of choice.” (Read the entire explanation in this blog post)
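To make the decoupling concrete, here is a minimal sketch of the idea (my own illustration, not code from the Solid project): the application reads and writes Linked Data over plain HTTP against whatever personal data store the user has chosen, so swapping storage providers means changing nothing but a URL. The pod address and resource path below are hypothetical.

```typescript
// A minimal, hypothetical sketch of the decoupling idea: the application
// reads and writes Linked Data (Turtle) on whatever storage server the user
// chooses, over plain HTTP. Nothing below is tied to one app or one server.

// The user's personal data store; swapping storage providers means
// changing only this URL (the address is made up for illustration).
const podResource = "https://alice.example-pod.org/profile/card";

// Read the user's profile from their chosen storage server.
async function readProfile(): Promise<string> {
  const response = await fetch(podResource, {
    headers: { Accept: "text/turtle" }, // a standard RDF serialization
  });
  return response.text();
}

// Write updated data back to the same user-chosen location. Any application
// that speaks HTTP and Turtle can do this; the data never lives inside the app.
async function updateProfile(turtle: string): Promise<void> {
  await fetch(podResource, {
    method: "PUT",
    headers: { "Content-Type": "text/turtle" },
    body: turtle,
  });
}
```

Because both the application and the server only need to agree on open standards (HTTP and an RDF serialization), either side can be replaced without touching the data itself, which is the “freedom of choice” Andrei describes.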

I was so excited to find out about the efforts led by Prof. Berners-Lee and his team. At the end of the presentation and the discussion, I asked, just to make sure I had understood correctly: “Are you trying to reinvent the Internet?”. And Prof. Berners-Lee replied, simply: “Yes”. A couple of weeks later I saw this article in the New York Times: “The Web’s creator looks to reinvent it”. So I did understand correctly 🙂

But why was I so excited? Because I saw first-hand that some of the greatest minds in the world are working to give individuals back control on the Internet. Some of the greatest minds in the world are not giving up on privacy, irrespective of how many “Privacy is dead” books and articles are published, and irrespective of how public and private policymakers, lobbyists and courts understand, at this moment in history, the value of privacy and of what Andrei called “freedom of choice” in the digital world.

I was excited because I discovered a common goal that we, the legal privacy bookworms and occasional policymakers, share with the IT masterminds: to empower the ‘data subject’, the ‘user’, well, the human being, in the new Digital Age; to put them back in control; and to curtail unnecessary invasions of privacy carried out for all kinds of purposes (from profit-making to security).

In fact, my entire PhD thesis was built on the assumption that the rights of the data subject, as provided in EU law (the rights to access, to erasure, to object, to be informed, to oppose automated decision-making), are all prerogatives that aim to give individuals control over their own data. So if technical solutions are developed to make this kind of control practical and effective, I am indeed excited about it!

I also realised that some of the provisions that survived incredible, multifaceted opposition to make it into the new General Data Protection Regulation are in fact tenable, like the right to data portability (check out Article 20 of the GDPR, here).

This is why, when I saw that today the world celebrates 25 years since the Internet went public, I remembered this moment in May and I wanted to share it with you. Here’s to a decentralised Internet!

Later Edit: The man himself says August 23 is not exactly accurate. Nor is “25 years”! In any case, it was still a good day for me to think about all of the above and share it with you 🙂


Accessing content of emails – the 2nd Californian Gmail case. A summary and some post scriptum thoughts

Yesterday I stumbled upon the ‘Order denying the motion to dismiss as to the merits of plaintiff’s claims’, issued by the US District Court for the Northern District of California on 12 August 2016 in the case of Matera v. Google. The order allows the trial against Google to move forward.

This is the second case brought before the Californian court alleging that Google’s practice of scanning the content of emails sent through its Gmail service violates US wiretap laws. The first attempt was not successful because the plaintiffs could not constitute a ‘class’ (there is a short history of the first case recalled in the Order). What’s interesting is that most of the findings in this Order are in fact re-statements of findings from the previous case. And with this case moving forward for now, there is a chance we will see a real assessment of the facts and an actual court decision in the end.

Now, the plaintiff seeks to represent non-Gmail users ‘who have never established an email account with Google, and who have sent emails to or received emails from individuals with Google email accounts’.

‘Google allegedly intercepted the emails for the dual purposes of (1) providing advertisements targeted to the email’s recipient or sender, and (2) creating user profiles to advance Google’s profit interests.’

According to the plaintiff, Google utilizes the user profiles ‘for purposes of selling to paying customers, and sending to the profiled communicants, targeted advertising based upon analysis of these profiles’ (p. 3).

Google defends itself by stating, among other things, that this practice is part of its ‘ordinary course of business’ and therefore falls under an exception in the Wiretap Act that allows it to look at the content of communications.

I read the Order with the mind of an EU data protection lawyer who was part of the team assessing the EU-US Privacy Shield for the Article 29 Working Party, and here is a list of the findings that caught my eye:

  1. The Court found it plausible that the use of data to target ads is not ‘routine and legitimate commercial behaviour’ that is part of Google’s ordinary course of business, so it’s not exempted under the Wiretap Act.
  • The Court reiterated that it stands by the findings in Gmail I, according to which ‘the ordinary course of business exception protected electronic communication service providers from liability where the interceptions facilitated or were incidental to provision of the electronic communication service at issue‘ (p. 11).
  • In other words, the Court concluded that there ‘must be some nexus between the need to engage in the alleged interception and the provider’s ultimate business, that is, the ability to provide the underlying service or good‘ (p. 12).
  • Otherwise, the Court explained, ‘an electronic communication service provider could claim that any activity routinely undertaken for a business purpose is within the ordinary course of business, no matter how unrelated the activity is to the provision of the electronic communication service‘ (p. 15).
  • The Court further restated an argument of Chief Judge Hamilton from a previous case, who noted that ‘it is untenable for electronic communication service providers to ‘self-define’ the scope of their exemption from Wiretap Act liability‘.
  • Google used the following argument: ‘the alleged interception of email enables Google to provide targeted advertising, which in turn generates the revenue necessary for Google to provide Gmail. Google further contends that “the use of data to target ads is routine and legitimate commercial behavior”‘ (p. 24).
  • The Court in fact found that, because Google ceased intercepting and analysing the contents of emails transmitted via Google Apps for Education, ‘Google is able to provide the Gmail service to at least some users without intercepting, scanning and analyzing the content of email for advertising purposes’ (p. 24).

2. Google claims that California’s Invasion of Privacy Act (CIPA) does not apply to email and does not apply to new technologies in general. The Court is ‘unpersuaded’ by these claims and follows the California Supreme Court’s philosophy according to which, when faced with two possible interpretations of CIPA, the court construes CIPA ‘in accordance with the interpretation that provides the greatest privacy protection’.

  • Section 631 of the California Penal Code creates liability for any individual who ‘reads, or attempts to read or to learn the contents or meaning of any message, report or communication while the same is in transit or passing over any wire, line or cable, or is being sent from or received at any place within this state‘.
  • There is also a Section 630, according to which:

The Legislature hereby declares that advances in science and technology have led to the development of new devices and techniques for the purpose of eavesdropping upon private communications and that the invasion of privacy resulting from the continual and increasing use of such devices and techniques has created a serious threat to the free exercise of personal liberties and cannot be tolerated in a free and civilized society.

The Legislature by this chapter intends to protect the right of privacy of the people of this state.

  • The Court refers to the California Supreme Court’s findings in Flanagan v. Flanagan that ‘In enacting [CIPA], the Legislature declared in broad terms its intent to protect the right of privacy of the people of this state from what it perceived as a serious threat to the free exercise of personal liberties that cannot be tolerated in a free and civilized society. This philosophy appears to lie at the heart of virtually all the decisions construing [CIPA]’ (p.33). (Flanagan v. Flanagan, 27 Cal. 4th 766, 775 (2002)).
  • Replying to Google’s claim that CIPA cannot refer to emails, as emails did not exist at the time CIPA was adopted, the Court again quotes the California Supreme Court, which regularly reads statutes to apply to new technologies where such a reading would not conflict with the statutory scheme (pp. 34-35):

“Fidelity to legislative intent does not ‘make it impossible to apply a legal text to technologies that did not exist when the text was created. . . . Drafters of every era know that technological advances will proceed apace and that the rules they create will one day apply to all sorts of circumstances they could not possibly envision.” (Apple Inc. v. Superior Court, 56 Cal. 4th 128, 137 (2013))

  • Finally, the Court refers to two other courts in its district that have already decided to apply Section 631 of CIPA to ‘electronic communications similar to email’ (In re Facebook Internet Tracking Litig., 140 F. Supp. 3d at 936 – holding that Section 631 applies to “electronic communications”; Campbell, 77 F. Supp. 3d at 848 – finding that plaintiffs stated a claim under Section 631 when the defendant allegedly intercepted online Facebook messages) (p. 37).

Some post scriptum thoughts:

When the Court says there must be a ‘nexus between the need to engage in the alleged interception and the provider’s ultimate business, that is, the ability to provide the underlying service or good’ for the interception of the content of communications to be lawful, it is almost as if the Court were saying that the interception must be ‘strictly necessary’ for the ordinary course of business (for background on the ‘necessity’ condition in EU data protection law, click HERE).

If we were to transpose this case into the realm of EU law, we might not even reach the question of whether the interception (which amounts to an interference) is strictly necessary to achieve a purpose ‘allowed’ by the applicable law. The main question to answer would be: does accessing all the content of emails and profiling all the users for marketing purposes touch the essence of the fundamental right to private life and that of the fundamental right to data protection?

There are very good chances the answer would be ‘yes’…

Why (I think) the WP29 Statement on the Privacy Shield is not really a ‘carte blanche’ for one year

The Plenary of the Article 29 Working Party (composed of the national Data Protection Authorities – DPAs – in Europe and the European Data Protection Supervisor) met on 26 July to discuss, among other topics, the adopted text of the EU-US Privacy Shield and its accompanying adequacy decision, issued by the European Commission on 12 July.

The Group adopted a Statement concerning its assessment of the adopted version of the Privacy Shield. To make a long story short, WP29 issued an Opinion on the Privacy Shield on 13 April, containing concerns, some of which are still outstanding, about the level of protection afforded by the Privacy Shield to personal data transferred from the EU to the U.S. This, together with a later Opinion issued by the European Data Protection Supervisor, prompted the Commission to go back to the negotiation table with representatives of the U.S. government in order to alleviate these concerns. On 12 July, after being approved by the Article 31 Committee, the final text of the Privacy Shield was adopted by the Commission.

The Statement issued by WP29 is meant to address the changes brought to the text of the Privacy Shield after the last rounds of negotiations. Have the two negotiating parties addressed the concerns raised by DPAs? Have they provided the requested clarifications?

WP29 stated that:

‘a number of these concerns remain regarding both the commercial aspects and the access by U.S. public authorities to data transferred from the EU.’

The WP29 Statement is very brief – the Group preferred not to launch into an extensive legal analysis of the changes brought to the text. That would have required more time, and the benefits of a detailed analysis at this stage, just after the text has been adopted, are few. However, the messages in the one-page Statement are very clear, and they are quite critical.

The DPAs highlight three key issues that were not solved regarding transfers in the commercial area (and they mention these three as examples, thus suggesting that there are more ‘concerns’ which have not been dealt with):

  • the lack of specific rules on automated decisions (profiling)
  • the lack of a general right to object
  • the fact that it remains unclear how the Privacy Shield Principles apply to processors

WP29 also refers to two issues that are not entirely solved regarding access by U.S. public authorities to the transferred data:

  • the guarantees concerning the independence and the powers of the Ombudsperson mechanism are not strict enough
  • regarding mass and indiscriminate collection of personal data, the lack of concrete assurances that such practice does not take place (while, at the same time, noting ‘the commitment of the ODNI not to conduct mass and indiscriminate collection of personal data’ – yes, collection and not use)

The last two points, at least, go right to the essence of the right to personal data protection and, respectively, the right to respect for private life. The first has the potential to trigger a breach of Article 8(3) of the EU Charter (independence of supervisory authorities), and the second could amount to ‘legislation permitting the public authorities to have access on a generalised basis to the content of electronic communications’. And, as the CJEU found, such legislation ‘must be regarded as compromising the essence of the fundamental right to respect for private life, as guaranteed by Article 7 of the Charter’ (para. 94 of the Schrems judgment).

Moreover, even the three former points of concern could be understood as a failure to implement the general obligation to protect personal data under Article 8(1) of the Charter, were they to be analysed by a Court. (For a similar reasoning, but concerning the rules on international data transfers, see para. 72 of the Schrems judgment.)

So, why do I think WP29 did not give a ‘carte blanche’ or a ‘green light’ for the application of the Privacy Shield?

First, because it is not within its competence to do so. According to Article 29(1) of Directive 95/46, the WP29 ‘shall have advisory status’. Article 30 of the Directive enumerates all the competences and powers of the Working Party: giving opinions, informing the Commission, issuing recommendations, advising the Commission. WP29 is not a Court. It is not even an administrative body that can deal with complaints and issue enforceable decisions to solve them. It cannot simply decide that a legal act issued by the European Commission (such as an adequacy decision) will be disapplied, much less annulled.

The CJEU was more than clear in Schrems when stating that ‘the Court (of Justice of the EU – my addition) alone has jurisdiction to declare that an EU act, such as a Commission decision adopted pursuant to Article 25(6) of Directive 95/46, is invalid, the exclusivity of that jurisdiction having the purpose of guaranteeing legal certainty by ensuring that EU law is applied uniformly’ (para 61 of the judgment).

WP29 could not challenge the Privacy Shield in Court, either. It does not have this competence.

The ones that could indeed challenge the validity of the adequacy decision are the individual members of the Article 29 Working Party, the national DPAs – and only those whose national law gives them the legal standing to go to their national courts (the others could also initiate such proceedings, if they knew how to directly invoke before the national courts the provision of Directive 95/46 granting them this competence – the third indent of Article 28(3); but that is another EU law discussion).

However, just as the CJEU points out in the Schrems judgment, court proceedings initiated by the DPAs are most likely to be possible only in situations where a complaint has been made by an individual (this also depends on the national procedural laws of EU Member States) and the DPA happens to agree with the complainant.

‘where the national supervisory authority considers that the objections advanced by the person who has lodged with it a claim concerning the protection of his rights and freedoms in regard to the processing of his personal data are well founded, that authority must, in accordance with the third indent of the first subparagraph of Article 28(3) of Directive 95/46, read in the light in particular of Article 8(3) of the Charter, be able to engage in legal proceedings‘. (CJEU, para. 65 of Schrems)

Perhaps it is not a coincidence that the only concrete immediate step mentioned by the WP29 in its Statement is the commitment of its members to ‘proactively and independently assist the data subjects with exercising their rights under the Privacy Shield mechanism, in particular when dealing with complaints‘.

Another concrete step the WP29 can take regarding the level of protection of the safeguards contained in the Privacy Shield is, indeed, to focus on the first Joint Annual Review. The Review will probably take place at the beginning of summer 2017, close to the one-year anniversary of the Shield’s adoption, and it is the quickest way to have the Privacy Shield adequacy decision suspended or repealed (see paragraphs 150 and 151 of the adequacy decision), if it indeed does not provide an adequate level of protection.

In the meantime, the members of the WP29 can very well use as guidance the complex analysis in the 58 pages of the Opinion on the draft Privacy Shield issued on 13 April when dealing with complaints.

This is why I think that yesterday’s Statement is not the ‘carte blanche’ or ‘the green light’ almost everyone thought it was.

***

If you want to read more on the topic:

EU privacy watchdogs keep open mind on new U.S. data privacy pact (Reuters)

EU watchdogs permit Privacy Shield to run for one year (BBC)

EU Privacy Regulators Give Green Light to Data-Transfer Pact with U.S. (WSJ)

EU privacy watchdogs vow to thoroughly frisk Privacy Shield next year (Ars Technica)

Les gendarmes européens de la vie privée critiquent l’accord Privacy Shield [European privacy watchdogs criticise the Privacy Shield agreement] (Le Monde)

EDPS issues guidelines on how to ensure confidentiality of whistleblowers

The European Data Protection Supervisor issued today (18 July 2016) Guidelines addressed to the EU institutions and bodies on how to deal with whistleblowers in a way that complies with the data protection requirements of Regulation 45/2001.

The first thing you need to know is that the EU Staff Regulations contain an obligation for staff members and other persons working for the EU institutions and bodies to report in writing any reasonable suspicion of illegal activities to the hierarchy or to the European Anti-Fraud Office (“OLAF”) directly.

EU institutions are required to manage whistleblowing reports and ensure the protection of personal information of the whistleblowers, the alleged wrongdoers, the witnesses and the other persons appearing in the report.

According to the EDPS, “the most effective way to encourage staff to report concerns is to ensure them that their identity will be protected. Therefore, clearly defined channels for internal and external reporting and the protection of the information received should be in place. The identity of the whistleblower who reports serious wrongdoings or irregularities in good faith should be treated with the utmost confidentiality as they should be protected against any retaliation”.

Here is a list of the main recommendations from the Guidelines:

1. Implement defined channels for internal and external reporting and specific rules where the purpose is clearly specified.

2. Ensure confidentiality of the information received and protect the whistleblowers’ identity and all other persons involved.

3. Apply the principle of data minimisation: only process personal information that is adequate, relevant and necessary for the particular case.

4. Identify what personal information means in this context and who the affected individuals are, in order to determine their rights of information, access and rectification. Restrictions to these rights are allowed, as long as the EU institutions are able to provide documented reasons before taking such a decision.

5. Apply the two-step procedure to inform each category of individuals concerned about how their data will be processed.

6. Ensure when responding to right of access requests that personal information of other parties is not revealed.

7. Assess the appropriate competence of the recipient (internal or external) and then limit the transfer of personal information only when necessary for the legitimate performance of tasks covered by the competence of the recipient.

8. Define proportionate conservation periods for the personal information processed within the scope of the whistleblowing procedure, depending on the outcome of each case.

9. Implement both organisational and technical security measures based on a risk assessment analysis of the whistleblowing procedure in order to guarantee a lawful and secure processing of personal information.

ECHR, on the private life of third parties in the context of telephone tapping authorised by a judge

The European Court of Human Rights gave its judgment yesterday in the case of Pruteanu v. Romania (Application no. 30181/05), which concerns the complaint of a lawyer whose conversations with a client were intercepted by prosecutors in the context of a criminal case. The client was not a party to the criminal case, but he was an associate of the accused persons. The recordings were used in the criminal trial, to which neither the lawyer nor his client was a party. The lawyer wanted to challenge the legality of the interceptions and to request their deletion, but was not able to do so.

The facts of the case raise the issue of the extent to which third parties, whose telephone conversations are recorded under an interception authorisation issued in someone else’s name, enjoy the right to private life under Article 8 of the European Convention on Human Rights.

The Court emphasises in this judgment that an “effective control” of a judge-issued interception authorisation, even if only exercised a posteriori, must be available to a third party to that authorisation in order to make the interception compatible with the third party’s right to private life.

Facts

“On 1 September 2004 the commercial company M. was barred from carrying out bank transactions. The police received several criminal complaints against the company for deceit. One of the company’s partners, C.I., instructed the applicant as his defence lawyer. On 24 September 2004 the District Court authorised the prosecuting authorities to intercept and record the partners’ telephone conversations for a period of thirty days.

From 27 September to 27 October 2004 the fraud investigation unit intercepted and recorded C.I.’s conversations, including twelve conversations with the applicant. On 21 March 2005 the District Court held that the recordings were relevant to the criminal case against C.I.’s fellow partners in company M., and ordered that the transcripts and the tapes be placed under seal. Mr Pruteanu and C.I. both lodged appeals, which were declared inadmissible” (Source).

Findings of the Court

After stating that any interception of a conversation is an interference with the right to private life, the Court analysed whether this interference was necessary in a democratic society.

The Court notes that “the authorisation to record the conversations of C.I. was given by a tribunal. Nevertheless, that authorisation targeted C.I. and not the applicant, so it cannot be concluded that the tribunal had examined a priori the necessity of the measure with regard to the person concerned. Furthermore, the Court recalls that it has already rejected the reasoning according to which the mere fact that the person who issues an order and supervises the interceptions is a magistrate implies, ipso facto, the lawfulness and the conformity of the interceptions with Article 8 of the Convention, since such reasoning would make any remedy for the interested parties inoperative” (para. 50, my translation; the Court refers here to the Matheron case, para. 40).

Further, the Court considers that it has to examine “whether the applicant had the possibility to challenge the recordings a posteriori in order to have them reviewed” (para. 51, my translation).

Analysing the legislation in force at the time of the facts, the Court concluded that the applicant did not have legal standing to intervene in the criminal proceedings in which the recordings were used: “therefore, the applicant could not have the legality and the necessity of the recordings reviewed on the basis of his own arguments, nor could he request that the interests of justice be balanced against his right to respect for private life and correspondence” (para. 52, my translation).

Considering that the only way the applicant could have challenged the legality of the interceptions was during a criminal trial against himself or against his client, the Court concluded that “the accessibility of the remedy for the applicant must be considered uncertain” (para. 54, my translation).

As regards a civil action for damages (which was indicated by the Government as an alternative), the Court stated that “the Government did not provide any example of case-law that would prove the effectiveness of this particular remedy. In addition, a complaint before the civil judge regarding the pecuniary liability of the State is not of such a nature as to allow a review of the legality of the recordings and to lead, where appropriate, to a decision ordering their destruction – the result sought by the applicant – and therefore it cannot be seen as an effective control for the purposes of Article 8” (para. 55, my translation).

The applicant was awarded EUR 4,500 in respect of non-pecuniary damage.

The trouble with Science’s special issue on privacy is that it’s called “The End of Privacy”

The prestigious Science magazine’s issue released today is dedicated to privacy. The only problem is that its title is “The End of Privacy”. This statement is too dramatic. I don’t think we are facing the end of privacy, but rather an explosion of privacy-invading technologies and practices.

Privacy as an inherent human value cannot disappear.

Privacy as the web of legal protection is not likely to disappear soon. Au contraire. It is likely it will be developed and taken more and more seriously.

The fact remains that privacy is under siege. But if scientific magazines are starting to publish entire issues on this topic, it would be more useful if they did not declare privacy dead, but instead figured out ways to construct a stronger web (technical, legal or of whatever other nature) of privacy protection.

Never mind the title. Beyond it, there are some interesting articles:

1) Privacy and human behavior in the age of information, by Alessandro Acquisti, Laura Brandimarte and George Loewenstein.

2) Could your pacemaker be hackable?, by Daniel Clery (Medical devices connected to the Internet are vulnerable to sabotage or data theft).

3) Hiding in plain sight, by Jia You. (Software lets you use location-based apps without revealing where you are).

4) Control use of data to protect privacy, by Susan Landau (“..But notice, designated as a fundamental privacy principle in a different era, makes little sense in situations where collection consists of lots and lots of small amounts of information, whereas consent is no longer realistic, given the complexity and number of decisions that must be made. Thus, efforts to protect privacy by controlling use of data are gaining more attention…”)

While you’re at it, also check out my CPDP 2013 paper (presented two years ago at the conference in Brussels and published that year in a Springer volume edited by the organisers of the conference), Forgetting about consent. Why the focus should be on suitable safeguards in data protection law.

In conclusion, no, this is not the end of privacy. This is just the middle of a very, very difficult fight to protect privacy.

Main points from FTC’s Internet of Things Report

The FTC published on 27 January a Report on the Internet of Things, based on the conclusions of a workshop organised in November with representatives of industry, consumers and academia.

It is apparent from the Report that the most important issue to be tackled by the industry is data security – it also represents the most important risk to consumers.

While data security receives the most attention in the Report and the greater part of the recommendations for best practices, data minimisation and notice and choice are considered to remain relevant and important in the IoT environment. The FTC even provides a list of practical options for the industry to provide notice and choice, admitting that there is no one-size-fits-all solution.

The most welcome recommendation in the Report (at least, for this particular reader) was the one referring to the need for general data security and data privacy legislation – not legislation specially tailored to the IoT. The FTC called on Congress to act on these two topics.

Here is a brief summary of the Report:

The IoT definition from the FTC’s point of view

Everyone in the field knows there is no generally accepted definition of the IoT. It is therefore helpful to know what the FTC considers the IoT to be for the purposes of its own activity:

“things” such as devices or sensors – other than computers, smartphones, or tablets – that connect, communicate or transmit information with or between each other through the Internet.

In addition, the FTC clarified that, consistent with its mission to protect consumers in the commercial sphere, its discussion of the IoT is limited to devices that are sold to or used by consumers.

Stunning facts and numbers

  • as of this year, there will be 25 billion connected devices worldwide;
  • fewer than 10,000 households using one company’s IoT home automation product can “generate 150 million discrete data points a day” or approximately one data point every six seconds for each household.
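A quick check of that second figure: 150,000,000 data points a day spread across 10,000 households is 15,000 points per household per day, and with 86,400 seconds in a day that works out at one data point roughly every six seconds (86,400 / 15,000 ≈ 5.8).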

Data security, the elephant in the room

Most of the recommendations for best practices that the FTC made are about ensuring data security. According to the Report, companies:

  • should implement “security by design” by building security into their devices at the outset, rather than as an afterthought;
  • must ensure that their personnel practices promote good security; as part of their personnel practices, companies should ensure that product security is addressed at the appropriate level of responsibility within the organization;
  • must work to ensure that they retain service providers that are capable of maintaining reasonable security, and provide reasonable oversight to ensure that those service providers do so;
  • should implement a defense-in-depth approach, where security measures are considered at several levels; (…) FTC staff encourages companies to take additional steps to secure information passed over consumers’ home networks;
  • should consider implementing reasonable access control measures to limit the ability of an unauthorized person to access a consumer’s device, data, or even the consumer’s network;
  • should continue to monitor products throughout the life cycle and, to the extent feasible, patch known vulnerabilities.

Attention to de-identification! 

In the IoT ecosystem, data minimization is challenging, but it remains important.

  • Companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data.
  • To the extent that companies decide they need to collect and maintain data to satisfy a business purpose, they should also consider whether they can do so while maintaining the data in de-identified form.

When a company states that it maintains de-identified or anonymous data, the Commission has stated that companies should

  1. take reasonable steps to de-identify the data, including by keeping up with technological developments (see the sketch after this list);
  2. publicly commit not to re-identify the data; and
  3. have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.
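As an illustration of what the first, technical step might look like in practice, here is a short sketch (my own, not from the FTC Report) of one common approach: replacing a direct identifier with a keyed pseudonym before the data is stored or shared. The record shape and field names are hypothetical, and keyed hashing alone does not amount to full de-identification; it simply removes the raw identifier from the dataset.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical IoT telemetry record as collected from a device.
interface TelemetryRecord {
  deviceId: string; // direct identifier of the household's device
  temperature: number;
  recordedAt: string;
}

// The same record with the identifier replaced by a keyed pseudonym.
interface DeidentifiedRecord {
  devicePseudonym: string;
  temperature: number;
  recordedAt: string;
}

// Replace the device identifier with an HMAC-based pseudonym. The secret key
// stays with the data holder; a third party receiving the data cannot reverse
// the pseudonym to the original identifier without that key.
function deidentify(record: TelemetryRecord, secretKey: string): DeidentifiedRecord {
  const devicePseudonym = createHmac("sha256", secretKey)
    .update(record.deviceId)
    .digest("hex");
  return {
    devicePseudonym,
    temperature: record.temperature,
    recordedAt: record.recordedAt,
  };
}
```

Note that this only covers the technical half of the guidance: the second and third steps (the public commitment and the contractual commitments with third parties) are organisational measures, not code.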

Notice and choice – difficult in practice, but still relevant

While the traditional methods of providing consumers with disclosures and choices may need to be modified as new business models continue to emerge, (FTC) staff believes that providing notice and choice remains important, as potential privacy and security risks may be heightened due to the pervasiveness of data collection inherent in the IoT. Notice and choice is particularly important when sensitive data is collected.

  • Staff believes that providing consumers with the ability to make informed choices remains practicable in the IoT;
  • Staff acknowledges the practical difficulty of providing choice when there is no consumer interface, and recognizes that there is no one-size-fits-all approach. Some options are enumerated in the report – several of which were discussed by workshop participants: choices at point of sale, tutorials, codes on the device, choices during set-up.

No need for IoT specific legislation, but general data security and data privacy legislation much needed

  • Staff does not believe that the privacy and security risks, though real, need to be addressed through IoT-specific legislation at this time;
  • However, while IoT specific-legislation is not needed, the workshop provided further evidence that Congress should enact general data security legislation;
  • General technology-neutral data security legislation should protect against unauthorized access to both personal information and device functionality itself;
  • General privacy legislation that provides for greater transparency and choices could help both consumers and businesses by promoting trust in the burgeoning IoT marketplace. In addition, as demonstrated at the workshop, general privacy legislation could ensure that consumers’ data is protected, regardless of who is asking for it.

“The EU-US interface: Is it possible?” CPDP2015 panel. Recommendation and some thoughts

The organisers of CPDP 2015 have made available on their YouTube channel some of the panels from this year’s conference, which took place last week in Brussels. This is a wonderful gift for people who weren’t able to attend CPDP this year (like myself). So a big thank you for that!

While all of them seem interesting, I especially recommend the “EU-US interface: Is it possible?” panel. My bet is that the dichotomy between the EU and US privacy legal regimes, and the debates surrounding it, will set the framework for tomorrow’s global protection of private life.

Exactly one year ago I wrote a four-page research proposal for a post-doc position with the title “Finding Neverland: The common ground of the legal systems of privacy protection in the European Union and the United States”. A very brave idea, to say the least, in a general scholarly environment which still widely accepts Whitman’s liberty vs dignity thesis as a fundamental “rift” between the American and European privacy cultures.

The idea I wanted to develop was to stop looking at what seem to be fundamental differences and to start searching for a common ground from which to build new understandings of protecting private life, accepted by both systems.

While it is true that, for instance, a socket in Europe is not the same as a socket in the US (as a traveller between the two continents I am well aware of that), fundamental human values do not change while crossing the ocean. Ultimately, I can turn the socket into a metaphor and say that even if the continents use two very different sockets, the function of those sockets is the same: they are a means to provide energy so that one’s electronic equipment works. So what is this “energy” of the legal regimes that protect private life in Europe and in the US?

My hunch is that this common ground is “free will”, and I have a bit of Hegel’s philosophy to back this idea. My research proposal was rejected (in fact, by the institute which, one year later, organized this panel at CPDP 2015 on the EU-US interface in privacy law). But, who knows? One day I may be able to pursue this idea and make it useful somehow for regulators that will have to find this common ground in the end.

You will discover in this panel some interesting ideas. Margot Kaminski (The Ohio State University Moritz College of Law) brings up the fact that free speech is not absolute in the US constitutional system – “copyright protection can win over the first amendment” she says. This argument is important in the free speech vs privacy debate in the US, because it shows that free speech is not “unbeatable”. It could be a starting point, among others, in finding some common ground.

Pierluigi Perri (University of Milan) and David Thaw (University of Pittsburgh) seem to be the ones who focus the most on the common ground of the two legal regimes. They say that, even if it seems that one system is more preoccupied with state intrusions into private life and the other with corporate intrusions, both systems share a “feared outcome – the chilling effect on action and speech” of these intrusions. They propose a “supervised market based regulation” model.

Dennis Hirsch (Capital University Law School) speaks about the need for global privacy rules, or something approximating them, “because data moves so dynamically in so many different ways today and it does not respect borders”. (I happen to agree with this statement – more details here.) Dennis argues in favour of sector co-regulation, that is, regulation by government and industry, to be applied in each sector.

Other contributions are made by Joris van Hoboken, University of Amsterdam/New York University (NL/US) and Eduardo Ustaran, Hogan Lovells International (UK).

The panel is chaired by Frederik Zuiderveen Borgesius (University of Amsterdam) and was organised by the Information Society Project at Yale Law School.

Enjoy!

CJEU: CCTV camera in family home falls under the Data protection directive, but it is in principle lawful

The CJEU gave its decision today in Case C-212/13 František Ryneš, under the preliminary ruling procedure. The press release is available here and the decision here.

Facts

A person who broke a window of the applicant’s home and was identified by the police with the help of the applicant’s CCTV camera complained that the footage was in breach of data protection law, as he had not given consent for that processing operation. The Data Protection Authority fined the applicant, and the applicant challenged the DPA’s decision before an administrative court. The administrative court sent a question for a preliminary ruling to the CJEU.

Video image is personal data

First, the Court established that “the image of a person recorded by a camera constitutes personal data because it makes it possible to identify the person concerned” (para. 22).

In addition, video surveillance involving the recording and storage of personal data falls within the scope of the Directive, since it constitutes automatic data processing.

Household exception must be “narrowly construed”

According to the Court, as far as the provisions of the Data protection directive govern the processing of personal data liable to infringe fundamental freedoms, they “must necessarily be interpreted in the light of the fundamental rights set out in the Charter (see Google Spain and Google, EU:C:2014:317, paragraph 68)”, and “the exception provided for in the second indent of Article 3(2) of that directive must be narrowly construed” (para. 29).

In this sense, the Court emphasized the use of the word “purely” in the legal provision for describing the personal or household activity under this exception (para. 30).

Such a processing operation is most likely lawful

In one of the last paragraphs of the decision, the Court clarifies that “the application of Directive 95/46 makes it possible, where appropriate, to take into account — in accordance, in particular, with Articles 7(f), 11(2), and 13(1)(d) and (g) of that directive — legitimate interests pursued by the controller, such as the protection of the property, health and life of his family and himself, as in the case in the main proceedings” (para. 34).

In practice, this means that even though the household exception does not apply in this case, and the processing operation must therefore comply with the requirements of the Data protection directive, those requirements leave room for a CCTV recording activity such as the one in the main proceedings to be lawful.

NB: The Court used atypical terminology in this decision – “the right to privacy” (para. 29).

What Happens in the Cloud Stays in the Cloud, or Why the Cloud’s Architecture Should Be Transformed in ‘Virtual Territorial Scope’

This is the paper I presented at the Harvard Institute for Global Law and Policy 5th Conference, on June 3-4, 2013. I decided to make it available open access on SSRN. I hope you will enjoy it, and I will be very pleased if any readers provide comments and ideas. The main argument of the paper is that we need global solutions for regulating cloud computing. It begins with a theoretical overview of global governance, internet governance and the territorial scope of laws, and it ends with three probable solutions for global rules envisaging the cloud. Among them, I propose the creation of a “Lex Nubia” (those of you who know Latin will know why 😉). My main concern, of course, relates to privacy and data protection in the cloud, but that is not the sole concern I deal with in the paper.

Abstract:

The most commonly used adjective for cloud computing is “ubiquitous”. This characteristic poses great challenges for law, which might find itself needing to revise its fundamentals. Regulating a “model” of “ubiquitous network access” which relates to “a shared pool of computing resources” (the NIST definition of cloud computing) is perhaps the most challenging task for regulators worldwide since the appearance of the computer, both procedurally and substantially. Procedurally, because it significantly challenges concepts such as the “territorial scope of the law” – what need is there for a territorial scope of a law when regulating a structure which is designed to be “abstracted”, in the sense that nobody knows “where things physically reside”? Substantially, because the legal implications connected with cloud computing services are complex and cannot be encompassed by one single branch of law, such as data protection law or competition law. This paper contextualizes the idea of a global legal regime for providing cloud computing services, on the one hand by referring to the wider context of global governance and, on the other hand, by pointing out several solutions for such a regime to emerge.

You can download the full text of the paper following this link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2409006