Category Archives: DP Fundamentals

A US Bill from 1974 shares so much DNA with the GDPR, it could be its ancestor

America’s own GDPR was introduced in Congress in 1974. This Bill applied to both government and companies, restricted international transfers, and offered U.S. and foreign “data subjects” rights to access, erasure and even… explanation.

The U.S. has recently been working towards finally adopting comprehensive privacy and data protection rules, with unfolding efforts at both federal and state level. Until now, only Californians can claim they have actually achieved something on the road to protecting their rights against the widespread collection and use of personal information. Other serious efforts are underway in Washington State, but they may end up being undermined by good intentions.

These developments are possible right now due to a combination of the EU General Data Protection Regulation’s (GDPR) global reach and notoriety, the countless privacy scandals affecting Americans, and the absence in the U.S. of comprehensive statutory legal protections for privacy and the other individual rights that may be affected by the collection and use of personal information.

But did you know this is not the first time the U.S. has had privacy law fever? In the late ’60s and early ’70s, American lawmakers were concerned about the rise of automated data processing and computerized databases. Serious efforts were put into analyzing how the rights of the American people could be protected against misuses and abuses of personal information. The Fair Credit Reporting Act was adopted in 1970. An influential Report was published in 1973 by the Department of Health, Education and Welfare (HEW), proposing a set of Fair Information Practice Principles built on an impressive, meticulous analysis (read it if you haven’t done so yet; bonus: it’s peppered with smart literary mottos in between chapters). The Report called for comprehensive federal privacy legislation applicable both to government and companies.

About six months after the publication of the HEW Report, in January 1974, Bill S.3418 was introduced in the US Senate by three Senators — Ervin, Percy and Muskie, ‘to establish a Federal Privacy Board, to oversee the gathering and disclosure of information concerning individuals, and to provide management systems in all Federal agencies, State and local governments, and other organizations’.

This Bill was clearly ahead of its time and aged astoundingly well, especially when compared to some of the key characteristics of the GDPR — the current global gold standard for comprehensive data protection law:

It applied to both public and private sectors, at federal and state level

The Bill had a very broad scope of application. It covered the activity of “organizations”, defined as any Federal agency; the government of the District of Columbia; any authority of any State, local government, or other jurisdiction; and any public or private entity engaged in business for profit. It exempted from its rules only information systems pertaining to Federal agencies that were vital to national defense, criminal investigatory files of Federal, State or local law enforcement, and any information maintained by the press or news media, except for information related to their employees.

It created a Federal Privacy Board to oversee its application

The Federal Privacy Board would have been created as part of the Executive branch, composed of five members appointed by the President with the approval of the Senate, for a three-year mandate. The Board would have been granted effective powers to investigate violations of the law (including by being granted admission to the premises where any information system or computer is kept), to recommend either criminal or civil penalties, and to actually order any organization found in breach of the law ’to cease and desist such violation’.

It equally protected the rights of Americans and foreigners as data subjects

It’s quite difficult to believe (especially in the context of the endless Transatlantic debates that ultimately led to the Judicial Redress Act), but this Bill explicitly protected “any data subject of a foreign nationality, whether residing in the United States or not” by requiring organizations to afford them “the same rights under this Act as are afforded to citizens in the United States”. Such a broad personal scope has been a characteristic of the European data protection law framework even before the GDPR. It is also what made possible the legal challenges brought in the UK against Cambridge Analytica by David Carroll, a U.S. citizen residing in New York.

It provided restrictions for international data transfers to jurisdictions which did not apply the protections enshrined in the Bill

Under this Bill, organizations were required to “transfer no personal information beyond the jurisdiction of the United States without specific authorization from the data subject or pursuant to a treaty or executive agreement in force guaranteeing that any foreign government or organization receiving personal information will comply with the applicable provisions of this Act with respect to such information”. The idea of restricting transfers of personal data to countries which do not ensure a similar level of protection is a staple of the EU data protection law regime and the source of some of the biggest EU-US tensions related to tech and data governance.

It provided for rights of access to, correction of, and “purging” of personal information. And for notification of purging to former recipients!

The Bill provided for an extensive right of access to one’s own personal information. It required organizations to grant data subjects “the right to inspect, in a form comprehensible” all personal information related to them, the nature of the sources of the information and the recipients of the personal information. In addition, it also granted individuals the right to challenge and correct information. As part of this right to challenge and correct information, the Bill even provided for a kind of “right to be forgotten”, since it asked organizations to “purge any such information that is found to be incomplete, inaccurate, not pertinent, not timely nor necessary to be retained, or can no longer be verified”. Moreover, the Bill also required organizations to “furnish to past recipients of such information notification that the item has been purged or corrected” at the request of the data subject.

It provided for transparency rights into statistical models and receiving some explanation

The same provision granting a right to challenge and correct personal information also referred to individuals wishing “to explain” information about them in information systems, but it is not clear how exactly organizations were expected to respond to such explanation requests. Elsewhere in the Bill, organizations “maintaining an information system that disseminates statistical reports or research findings based on personal information drawn from the system, or from systems of other organizations” were required to “make available to any data subject (without revealing trade secrets) methodology and materials necessary to validate statistical analyses” (!). Moreover, those organizations were also asked not to make information available for independent analysis “without guarantees that no personal information will be used in a way that might prejudice judgments about any data subject”.

It provided some rules even for collection of personal information

One of the key questions to ask about data protection legislation generally is whether it intervenes at the time of collection of personal data, as opposed to merely regulating its use. This Bill cared about collection too. It provided that organizations must “collect, maintain, use and disseminate only personal information necessary to accomplish a proper purpose of the organization”, “collect information to the greatest extent possible from the data subject directly” and even “collect no personal information concerning the political or religious beliefs, affiliations, and activities of data subjects which is maintained, used or disseminated in or by any information system operated by any governmental agency, unless authorized by law”.

There are other remarkable features of this Bill that are reminiscent of the GDPR, such as broad definitions of personal information and of data subjects (“an individual about whom personal information is indexed or may be located under his name, personal number, or other identifiable particulars, in an information system”), which show sophisticated thinking about managing the impact that automated processing of personal data might have on the rights of individuals. Enforcement of the Bill included criminal and civil penalties applied with the help of the U.S. Attorney General and the Federal Privacy Board, as well as a private right of action limited only to breaches of the right to access personal information.

So what happened to it? Throughout the legislative process in Congress, the Bill was almost completely rewritten, and it ultimately became the US Privacy Act of 1974 — a privacy law quite limited in scope (applicable only to Federal agencies) and ambition compared to the initial proposal. The answer to what happened during this process to so fundamentally rewrite the Bill lies somewhere in the 1,466 pages recording the debates around the US Privacy Act of 1974.

Even if it remained a failed attempt to provide comprehensive data protection and privacy legislation in the U.S., the Bill nonetheless shows how much common thinking is shared by Europe and America. At the same time this Bill was introduced in the U.S. Senate, Europe was having its own data protection law fever, with many legislative proposals being discussed in Western Europe after the first data protection law was adopted in 1970 in the German Land of Hesse. But according to Frits Hondius, a Dutch scholar who documented these efforts in his volume “Emerging Data Protection in Europe”, published in 1975:

“A factor of considerable influence was the development of data protection on the American scene. Almost every issue that arose in Europe was also an issue in the United States, but at an earlier time and on a more dramatic scale. (…) The writings by American authors about privacy and computers (e.g. Westin and Miller), the 1966 congressional hearings, and the examples set by federal and state legislation, such as the US Fair Credit Reporting Act 1970 and the US Privacy Act 1974, have made a deep impact on data protection legislation in Europe.”

After a shared start in the late ‘60s and early ‘70s, the two privacy and data protection law regimes evolved in significantly different directions. Almost half a century later, it seems to be Europe’s turn to impact the data protection and privacy law debate in the U.S.

Brief case-law companion for the GDPR professional

This collection of quotes from relevant case-law has been compiled with the purpose of being useful to all those working with EU data protection law. The majority of the selected findings are part of a “Countdown to the GDPR” I conducted on social media, one month before the Regulation became applicable, under #KnowYourCaseLaw. A couple of reasons prompted this exercise.

First, data protection in the EU is much older and wider than the General Data Protection Regulation (GDPR) and it has already invited the highest Courts in Europe to weigh in on the protection of this right. Knowing what those Courts have said is essential.

Data protection law in the EU is not only a matter of pure EU law, but also a matter of protecting human rights following the legal framework of the Council of Europe (starting with Article 8 of the European Convention on Human Rights – ‘ECHR’). The interplay between these two legal regimes is very important, given the fact that the EU recognizes fundamental rights protected by the ECHR as general principles of EU law – see Article 6(3) TEU.

Finally, knowing relevant case-law makes the difference between a good privacy professional and a great one.

What to expect

This is not a comprehensive collection of case-law and it does not provide background for the cases it addresses. The Handbook on European Data Protection Law, 2018 edition, is a great resource if that is what you are looking for.

This is a collection of specific findings of the Court of Justice of the EU (CJEU), the European Court of Human Rights (ECtHR) and one bonus finding of the German Constitutional Court. There are certainly other interesting findings that have not been included here (how about an “Encyclopedia of interesting findings” for the next project?). The ones that have been included provide insight into specific issues, such as the definition of personal data, what constitutes data related to health, what freely given consent means, or what type of interference with fundamental rights profiling constitutes. Readers will even find a quote from a concurring opinion of an ECtHR judge that is prescient, to say the least.

Enjoy the read!

Brief Case-Law Companion for the GDPR Professional

Here’s how Internet’s inventor wants to reinvent it and why this is great news for privacy

Last May I had the chance to meet Prof. Tim Berners-Lee and one of the lead researchers in his team at MIT, Andrei Sambra, when I accompanied Giovanni Buttarelli, the European Data Protection Supervisor, in his visit at MIT.

Andrei then presented the Solid project, and we had the opportunity to discuss it with Prof. Berners-Lee, who leads the work on Solid. The project “aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.” In other words, the researchers want to de-centralise the Internet.

“Solid (derived from “social linked data”) is a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles. Solid is modular and extensible and it relies as much as possible on existing W3C standards and protocols”, as explained on the project’s website.

Andrei explains in a blog post that, in a first step, the project finds solutions “to decouple the applications from the data they produce, and then to decouple the data from the actual storage server.”

“This means that applications and servers are interchangeable, and they can be swapped without impacting the most important part – your data. It’s all about freedom of choice.” (Read the entire explanation in this blog post)

I was so excited to find out about the efforts conducted by Prof. Berners-Lee and his team. At the end of the presentation and the discussion, I asked, just to make sure I understood it correctly: “Are you trying to reinvent the Internet?”. And Prof. Berners-Lee replied, simply: “Yes”. A couple of weeks later I saw this article in the New York Times: “The Web’s creator looks to reinvent it”. So I did understand correctly 🙂

But why was I so excited? Because I saw first-hand that some of the greatest minds in the world are working to bring control on the Internet back to the individual. Some of the greatest minds in the world are not giving up on privacy, irrespective of how many “Privacy is dead” books and articles are published, irrespective of how public and private policymakers, lobbyists and Courts understand, at this moment in history, the value of privacy and of what Andrei called “freedom of choice” in the digital world.

I was excited because I found out about a common goal that we, the legal privacy bookworms/occasional policymakers, and the IT masterminds share: empower the ‘data subject’, the ‘user’, well, the human being, in the new Digital Age, put them back in control and curtail unnecessary invasions of privacy for all kinds of purposes (from profit making to security).

In fact, my entire PhD thesis was built on the assumption that the rights of the data subject, as they are provided in EU law (rights to access, to erase, to object, to be informed, to oppose automated decision making) are all prerogatives of the individual that aim to give control to the individual over his or her data. So if technical solutions are developed for this kind of control to be practical and effective, I am indeed excited about it!

I also realised that some of the provisions that survived incredible, multifaceted opposition to make it to the new General Data Protection Regulation are in fact tenable, like the right to data portability (check out Article 20 of the GDPR, here).

This is why, when I saw that today the world celebrates 25 years since the Internet went public, I remembered this moment in May and I wanted to share it with you. Here’s to a decentralised Internet!

Later Edit: The man himself says August 23 is not exactly accurate. Nor 25 years! In any case, it was still a good day for me to think about all of the above and share it with you 🙂


“The EU-US interface: Is it possible?” CPDP2015 panel. Recommendation and some thoughts

The organizers of CPDP 2015 made available on their YouTube channel some of the panels from this year’s conference, which took place last week in Brussels. This is a wonderful gift for people who weren’t able to attend CPDP this year (like myself). So a big thank you for that!

While all of them seem interesting, I especially recommend the “EU-US interface: Is it possible?” panel. My bet is that the EU privacy legal regime/US privacy legal regime dichotomy and the debates surrounding it will set the framework of tomorrow’s global protection of private life.

Exactly one year ago I wrote a four-page research proposal for a post-doc position with the title “Finding Neverland: The common ground of the legal systems of privacy protection in the European Union and the United States”. A very brave idea, to say the least, in a general scholarly environment which still widely accepts Whitman’s liberty vs dignity thesis as a fundamental “rift” between the American and European privacy cultures.

The idea I wanted to develop was to stop looking at what seem to be fundamental differences and to start searching for a common ground from which to build new understandings of protecting private life accepted by both systems.

While it is true that, for instance, a socket in Europe is not the same as a socket in the US (as a traveller between the two continents, I am well aware of that), fundamental human values do not change while crossing the ocean. Ultimately, I can turn the socket into a metaphor and say that even if the continents use two very different sockets, the function of those sockets is the same: they are a means to provide energy so that one’s electronic equipment works. So what is this “energy” of the legal regime that protects private life in Europe and in the US?

My hunch is that this common ground is “free will”, and I have a bit of Hegel’s philosophy to back this idea. My research proposal was rejected (in fact, by the institute which, one year later, organized this panel at CPDP 2015 on the EU-US interface in privacy law). But, who knows? One day I may be able to pursue this idea and make it useful somehow for regulators that will have to find this common ground in the end.

You will discover some interesting ideas in this panel. Margot Kaminski (The Ohio State University Moritz College of Law) brings up the fact that free speech is not absolute in the US constitutional system — “copyright protection can win over the First Amendment”, she says. This argument is important in the free speech vs privacy debate in the US, because it shows that free speech is not “unbeatable”. It could be a starting point, among others, in finding some common ground.

Pierluigi Perri (University of Milan) and David Thaw (University of Pittsburgh) seem to be the ones who focus most on the common ground of the two legal regimes. They say that, even if it seems that one system is more preoccupied with state intrusions into private life and the other with corporate intrusions, both systems share a “feared outcome – the chilling effect on action and speech” of these intrusions. They propose a “supervised market based regulation” model.

Dennis Hirsch (Capital University Law School) speaks about the need for global privacy rules, or something approximating them, “because data moves so dynamically in so many different ways today and it does not respect borders”. (I happen to agree with this statement – more details, here). Dennis argues in favour of sectoral co-regulation, that is, regulation by government and industry, to be applied in each sector.

Other contributions are made by Joris van Hoboken, University of Amsterdam/New York University (NL/US) and Eduardo Ustaran, Hogan Lovells International (UK).

The panel is chaired by Frederik Zuiderveen Borgesius, University of Amsterdam, and organised by the Information Society Project at Yale Law School.

Enjoy!

What Happens in the Cloud Stays in the Cloud, or Why the Cloud’s Architecture Should Be Transformed in ‘Virtual Territorial Scope’

This is the paper I presented at the Harvard Institute for Global Law and Policy 5th Conference, on June 3-4, 2013. I decided to make it available open access on SSRN. I hope you will enjoy it, and I will be very pleased if any readers provide comments and ideas. The main argument of the paper is that we need global solutions for regulating cloud computing. It begins with a theoretical overview of global governance, internet governance and the territorial scope of laws, and it ends with three probable solutions for global rules envisaging the cloud. Among them, I propose the creation of a “Lex Nubia” (those of you who know Latin will know why 😉 ). My main concern, of course, is privacy and data protection in the cloud, but that is not the sole concern I deal with in the paper.

Abstract:

The most commonly used adjective for cloud computing is “ubiquitous”. This characteristic poses great challenges for law, which might find itself in need of revising its fundamentals. Regulating a “model” of “ubiquitous network access” which relates to “a shared pool of computing resources” (the NIST definition of cloud computing) is perhaps the most challenging task for regulators worldwide since the appearance of the computer, both procedurally and substantially. Procedurally, because it significantly challenges concepts such as “territorial scope of the law” – what need is there for a territorial scope of a law when regulating a structure which is designed to be “abstracted”, in the sense that nobody knows “where things physically reside”? Substantially, because the legal implications in connection with cloud computing services are complex and cannot be encompassed by one single branch of law, such as data protection law or competition law. This paper contextualizes the idea of a global legal regime for providing cloud computing services, on the one hand by referring to the wider context of global governance and, on the other hand, by pointing out several solutions for such a regime to emerge.

You can download the full text of the paper following this link: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2409006

“Purpose limitation”, explained by the Article 29 WP

On April 2, the Article 29 WP published its Opinion on “purpose limitation”, one of the safeguards that make data protection effective in Europe.

Purpose limitation protects data subjects by setting limits on how data controllers are able to use their data while also offering some degree of flexibility for data controllers. The concept of purpose limitation has two main building blocks: personal data must be collected for ‘specified, explicit and legitimate’ purposes (purpose specification) and not be ‘further processed in a way incompatible’ with those purposes (compatible use).

Further processing for a different purpose does not necessarily mean that it is incompatible: compatibility needs to be assessed on a case-by-case basis. A substantive compatibility assessment requires an assessment of all relevant circumstances. In particular, account should be taken of the following key factors:

– the relationship between the purposes for which the personal data have been collected and the purposes of further processing;
– the context in which the personal data have been collected and the reasonable expectations of the data subjects as to their further use;
– the nature of the personal data and the impact of the further processing on the data subjects;
– the safeguards adopted by the controller to ensure fair processing and to prevent any undue impact on the data subjects.

Conclusions of the Opinion:

First building block: ‘specified, explicit and legitimate’ purposes

With regard to purpose specification, the WP29 highlights the following key considerations:

 Purposes must be specific. This means that – prior to, and in any event no later than, the time when the collection of personal data occurs – the purposes must be precisely and fully identified to determine what processing is and is not included within the specified purpose, and to allow compliance with the law to be assessed and data protection safeguards to be applied.

 Purposes must be explicit, that is, clearly revealed, explained or expressed in some form in order to make sure that everyone concerned has the same unambiguous understanding of the purposes of the processing irrespective of any cultural or linguistic diversity. Purposes may be made explicit in different ways.

 There may be cases of serious shortcomings, for example where the controller fails to specify the purposes of the processing in sufficient detail or in a clear and unambiguous language, or where the specified purposes are misleading or do not correspond to reality. In any such situation, all the facts should be taken into account to determine the actual purposes, along with the common understanding and reasonable expectations of the data subjects based on the context of the case.

 Purposes must be legitimate. Legitimacy is a broad requirement, which goes beyond a simple cross-reference to one of the legal grounds for the processing referred to under Article 7 of the Directive. It also extends to other areas of law and must be interpreted within the context of the processing. Purpose specification under Article 6 and the requirement to have a lawful ground for processing under Article 7 of the Directive are two separate and cumulative requirements.

 If personal data are further processed for a different purpose:
– the new purpose/s must be specified (Article 6(1)(b)), and
– it must be ensured that all data quality requirements (Articles 6(1)(a) to (e)) are also satisfied for the new purposes.

Second building block: compatible use

 Article 6(1)(b) of the Directive also introduces the notions of ‘further processing’ and ‘incompatible’ use. It requires that further processing must not be incompatible with the purposes for which personal data were collected. The prohibition of incompatible use sets a limitation on further use. It requires that a distinction be made between further use that is ‘compatible’, and further use that is ‘incompatible’, and therefore, prohibited.

 By prohibiting incompatibility rather than requiring compatibility, the legislator seems to give some flexibility with regard to further use. Further processing for a different purpose does not necessarily and automatically mean that it is incompatible, as compatibility needs to be assessed on a case-by-case basis.

 In this context, the WP29 emphasises that the specific provision in Article 6(1)(b) of the Directive on ‘further processing for historical, statistical or scientific purposes’ should be seen as a specification of the general rule, while not excluding that other cases could also be considered as ‘not incompatible’. This leads to a more prominent role for different kinds of safeguards, including technical and organisational measures for functional separation, such as full or partial anonymisation, pseudonymisation, aggregation of data, and privacy enhancing technologies.

The Opinion is available HERE.

Going back to basics

Being in the process of writing my thesis, I have realized how important it is to stop searching through the whirling flux of current information and new developments in the area of privacy and information technology, or, more generally, “law and technology”, and to look back at the beginning of this craziness.

One might find answers for questions she didn’t even know she needed to answer. Or, at least, she might find some reassurance that the legal thought in this field is capable of steadiness and coherence.

This is why I decided to share with you the principles enshrined in the first “internationalization” effort of personal data protection that I know of, RESOLUTION (73) 22 ON THE PROTECTION OF THE PRIVACY OF INDIVIDUALS VIS-A-VIS ELECTRONIC DATA BANKS IN THE PRIVATE SECTOR (Adopted by the Committee of Ministers of the Council of Europe on 26 September 1973).

1.

The information stored should be accurate and should be kept up to date. In general, information relating to the intimate private life of persons or information which might lead to unfair discrimination should not be recorded or, if recorded, should not be disseminated.

2.

The information should be appropriate and relevant with regard to the purpose for which it has been stored.

3.

The information should not be obtained by fraudulent or unfair means.

4.

Rules should be laid down to specify the periods beyond which certain categories of information should no longer be kept or used.

5.

Without appropriate authorisation, information should not be used for purposes other than those for which it has been stored, nor communicated to third parties.

6.

As a general rule, the person concerned should have the right to know the information stored about him, the purpose for which it has been recorded, and particulars of each release of this information.

7.

Every care should be taken to correct inaccurate information and to erase obsolete information or information obtained in an unlawful way.

8.

Precautions should be taken against any abuse or misuse of information. Electronic data banks should be equipped with security systems which bar access to the data held by them to persons not entitled to obtain such information, and which provide for the detection of misdirections of information, whether intentional or not.

9.

Access to the information stored should be confined to persons who have a valid reason to know it. The operating staff of electronic data banks should be bound by rules of conduct aimed at preventing the misuse of data and, in particular, by rules of professional secrecy.

10.

Statistical data should be released only in aggregate form and in such a way that it is impossible to link the information to a particular person.

The original text of the Resolution can be found here.

We encounter access rights, purpose limitation, erasure of obsolete data and even the idea of anonymization. In 1973.

I got my ounce of inspiration from wondering how the essence of these principles is still relevant so many decades after they were published. And I hope you will also find yours.


DP fundamentals: Few facts on Information and Access

Among the concrete data protection rights individuals enjoy in Europe are the right to access data collected about them and the right to be informed about the processing of their data.

These rights are provided under Articles 10, 11 and 12 of Directive 95/46. However, great emphasis is placed on Article 12, which contains both the right of access and the right to obtain confirmation as to whether one’s personal data are being processed by a certain data controller.

Prof. Christopher Kuner writes in one of his books that “The rights granted to data subjects under Article 12 can present substantial difficulties for companies. First, given the distributed nature of computing nowadays, personal data may be contained in a variety of databases located in different geographic regions, so that it can be difficult to locate all the data necessary to respond to a data subject’s request. Indeed locating all the data pertaining to a particular data subject in order to allow him to know what data are being held about him to assert his rights of erasure, blockage etc. may require the data controller to comb through masses of data contained in various databases, which in itself could lead to data protection risks”.

He also writes that another source of problems in complying with Art. 12 is that Member States have transposed this provision differently with regard to the costs of access and the number of times the right can be exercised. “For instance, in Finland the data controller may charge its costs in accessing the data and requests by data subjects are limited at one per year, while in UK the controller may charge a fee of up to 10 pounds for access to each entry and reasonable time must elapse between requests. This disharmony of the law creates problems for data controllers that process data of data subjects from different Member States.”

Source: Christopher Kuner, European Data Privacy Law and Online Business, Oxford University Press, 2003 (p. 71, 72)

You can find the book here:

European Data Privacy Law and Online Business