pdpEcho is kicking off 2017 with a brief catalogue of recently published research that sets the tone for the new year.
First, Wolfie Christl and Sarah Spiekermann's report “Networks of Control”, published last month, is a must read for anyone who wants to understand how the digital economy feeds on the streams of data we all generate; it also reflects on the ethical implications of this economic model and proposes new models that would keep the surveillance society at bay. Second, a new report of the Global Commission on Internet Governance explores the governance gaps left by global structures developed in the analog age. Third, the US National Academy of Sciences recently published a report with concrete proposals on how to reconcile the use of different public and private data sources for government statistics with privacy and confidentiality. Last, a volume by Angela Daly, recently published by Hart Publishing, explores how EU competition law, sector-specific regulation, data protection and human rights law could tackle concentrations of power for the benefit of users.
1. “Networks of Control. A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy”, by Wolfie Christl and Sarah Spiekermann [OPEN ACCESS]
“Around the same time as Apple introduced its first smartphone and Facebook reached 30 million users in 2007, online advertisers started to use individual-level data to profile and target users individually (Deighton and Johnson 2013, p. 45). Less than ten years later, ubiquitous and real-time corporate surveillance has become a “convenient byproduct of ordinary daily transactions and interactions” (De Zwart et al 2014, p. 746). We have entered a surveillance society as David Lyon foresaw it already in the early 1990s; a society in which the practices of “social sorting”, the permanent monitoring and classification of the whole population through information technology and software algorithms, have silently become an everyday reality” (p. 118).
One of the realities we need to take into account when assessing this phenomenon is that “Opting out of digital tracking becomes increasingly difficult. Individuals can hardly avoid consenting to data collection without opting out of much of modern life. In addition, persons who don’t participate in data collection, who don’t have social networking accounts or too thin credit reports, could be judged as “suspicious” and “too risky” in advance” (p. 129).
The authors of the report explain that the title “Networks of Control” is justified “by the fact that there is not one single corporate entity that by itself controls today’s data flows. Many companies co-operate at a large scale to complete their profiles about us through various networks they have built up” (p. 7). They also explain that they want to close a gap created by the fact that “the full degree and scale of personal data collection, use and – in particular – abuse has not been scrutinized closely enough”, despite the fact that “media and special interest groups are aware of these developments for a while now” (p. 7).
What I found valuable in the approach of the study is that it also brings forward a topic that is rarely discussed when analysing Big Data, digital tracking and so on: the attempt of such practices to change behaviour at scale. “Data richness is increasingly used to correct us or incentivize us to correct ourselves. It is used to “nudge” us to act differently. As a result of this continued nudging, influencing and incentivation, our autonomy suffers (p. 7)”.
A chapter authored by Professor Sarah Spiekermann explores the ethical implications of the networks of control. She applies three ethical normative theories to personal data markets: “The Utilitarian calculus, which is the original philosophy underlying modern economics (Mill 1863/1987). The Kantian duty perspective, which has been a cornerstone for what we historically call “The Enlightenment” (Kant 1784/2009), and finally Virtue Ethics, an approach to life that originates in Aristotle’s thinking about human flourishing and has seen considerable revival over the past 30 years (MacIntyre 1984)” (p. 131).
Methodologically, the report is based on “a systematic literature review and analysis of hundreds of documents and builds on previous research by scholars in various disciplines such as computer science, information technology, data security, economics, marketing, law, media studies, sociology and surveillance studies” (p. 10).
2. Global Commission on Internet Governance “Corporate Accountability for a Free and Open Internet”, by Rebecca MacKinnon, Nathalie Maréchal and Priya Kumar [OPEN ACCESS]
The report shows that “as of July 2016, more than 3.4 billion people were estimated to have joined the global population of Internet users, a population with fastest one-year growth in India (a stunning 30 percent) followed by strong double digit growth in an assortment of countries across Africa (Internet Live Stats 2016a; 2016b)” (p. 1).
“Yet the world’s newest users have less freedom to speak their minds, gain access to information or organize around civil, political and religious interests than those who first logged on to the Internet five years ago” (p. 1).
Within this framework, the report explores the fact that “ICT sector companies have played a prominent role in Internet governance organizations, mechanisms and processes over the past two decades. Companies in other sectors also play an expanding role in global governance. Multinational companies wield more power than many governments over not only digital information flows but also the global flow of goods, services and labour: one-third of world trade is between corporations, and another third is intra-firm, between subsidiaries of the same multinational enterprise” (p. 5).
The authors also look at the tensions between governments and global companies over government demands to access data, weaken encryption and facilitate censorship in ways that contravene international human rights standards.
3. “Innovations in Federal Statistics: Combining Data Sources While Protecting Privacy”, by the National Academy of Sciences [OPEN ACCESS]
The tension between privacy on the one hand and statistical data and censuses on the other compelled the German Constitutional Court to create, in the 1980s, “the right to informational self-determination”. Could statistics prompt a reform of similar significance in the US? Never say never.
According to epic.org, the US National Academy of Sciences recently published a report that examines how disparate federal data sources can be used for policy research while protecting privacy.
The study shows that in the decentralised US statistical system there are 13 agencies whose primary mission is the creation and dissemination of statistics, and more than 100 agencies that engage in statistical activities. Stronger coordination and collaboration are needed to enable access to, and evaluation of, administrative and private-sector data sources for federal statistics. To this end, the report advises that “a new entity or an existing entity should be designated to facilitate secure access to data for statistical purposes to enhance the quality of federal statistics. Privacy protections would have to be fundamental to the mission of this entity“. Moreover, “the data for which it has responsibility would need to have legal protections for confidentiality and be protected using the strongest privacy protocols offered to personally identifiable information while permitting statistical use”.
One of the conclusions of the report is that “Federal statistical agencies should adopt modern database, cryptography, privacy-preserving and privacy-enhancing technologies”.
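To make the report's recommendation about “privacy-preserving and privacy-enhancing technologies” concrete, here is a minimal, purely illustrative sketch of one such technique, the Laplace mechanism from differential privacy, applied to a count query. The records and threshold are invented for illustration; this is not code from the report.

```python
import random
from math import log

def laplace_noise(scale):
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    is enough to mask any single respondent's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Invented example: how many respondents report income below a threshold.
records = [{"income": i} for i in (20_000, 35_000, 80_000, 15_000)]
noisy = private_count(records, lambda r: r["income"] < 40_000, epsilon=0.5)
# `noisy` hovers around the true answer (3), but each run differs;
# that randomness is exactly what protects individual respondents.
```

Smaller values of `epsilon` mean more noise and stronger privacy, which is the trade-off statistical agencies would have to calibrate for each data release.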
4. Private Power, Online Information Flows and EU Law. Mind The Gap, by Angela Daly, Hart Publishing [50 pounds]
“Using a series of illustrative case studies, of Internet provision, search, mobile devices and app stores, and the cloud, the work demonstrates the gaps that currently exist in EU law and regulation. It is argued that these gaps exist due, in part, to current overarching trends guiding the regulation of economic power, namely neoliberalism, by which only the situation of market failure can invite ex ante rules, buoyed by the lobbying of regulators and legislators by those in possession of such economic power to achieve outcomes which favour their businesses.
Given this systemic, and extra-legal, nature of the reasons as to why the gaps exist, solutions from outside the system are proposed at the end of each case study. This study will appeal to EU competition lawyers and media lawyers.”
A look at political psychological targeting, EU data protection law and the US elections
Cambridge Analytica, a company that uses “data modeling and psychographic profiling” (according to its website), is credited with having decisively contributed to the outcome of the presidential election in the US. It did so by using “a hyper-targeted psychological approach” that allowed it to see trends among voters that no one else saw, and thus to model the candidate's speech to resonate with those trends. According to Mashable, the same company also assisted the Leave.EU campaign that led to Brexit.
How do they do it?
“We collect up to 5,000 data points on over 220 million Americans, and use more than 100 data variables to model target audience groups and predict the behavior of like-minded people” (my emphasis), states their website (for comparison, the US population is about 324 million). They further explain that “when you go beneath the surface and learn what people really care about you can create fully integrated engagement strategies that connect with every person at the individual level” (my emphasis).
According to Mashable, the company “uses a psychological approach to polling, harvesting billions of data from social media, credit card histories, voting records, consumer data, purchase history, supermarket loyalty schemes, phone calls, field operatives, Facebook surveys and TV watching habits“. This data “is bought or licensed from brokers or sourced from social media”.
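Cambridge Analytica's actual models are proprietary, but “modeling target audience groups and predicting the behavior of like-minded people” generally boils down to a familiar pattern: reduce each profile to a numeric feature vector, learn group representatives from labeled examples, and score new profiles by similarity. A toy nearest-centroid sketch, with entirely invented data and labels, illustrates the idea:

```python
from math import dist  # Euclidean distance, Python 3.8+

# Toy feature vectors: each profile reduced to scaled scores derived from
# the kinds of variables mentioned above (media habits, purchase history,
# survey answers). All numbers and labels here are invented.
labeled = {
    "supporter":     [(0.9, 0.1, 0.8), (0.8, 0.2, 0.7)],
    "non_supporter": [(0.1, 0.9, 0.2), (0.2, 0.8, 0.1)],
}

def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

centroids = {label: centroid(pts) for label, pts in labeled.items()}

def predict(profile):
    """Assign the audience group whose centroid is closest: the
    'like-minded people' matching described in the quotes above."""
    return min(centroids, key=lambda label: dist(profile, centroids[label]))

# A new, unlabeled profile is scored against each audience group.
print(predict((0.85, 0.15, 0.75)))  # closest to the "supporter" centroid
```

Real psychographic targeting would use thousands of variables and far richer models, but the mechanics (and the privacy stakes) are the same: the more data points per person, the sharper the prediction.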
(For a person who dedicated their professional life to personal data protection this sounds chilling.)
Legal implications
Under US privacy law this kind of practice seems to have no legal implications, as it doesn’t involve processing by any authority of the state, it’s not a matter of consumer protection and it doesn’t seem to fall, prima facie, under any piece of the piecemeal legislation dealing with personal data in the U.S. (please correct me if I’m wrong).
Under EU data protection law, this practice would raise a series of serious questions (see below), without even getting into the debate of whether this sort of intimate profiling would also breach the right to private life as protected by Article 7 of the EU Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights (the right to personal data protection and the right to private life are protected separately in the EU legal order). Put simply, the right to data protection enshrines the “rules of the road” (safeguards) for data that is processed on a lawful ground, while the right to private life protects the inner private sphere of a person altogether, meaning it can prohibit unjustified interferences with a person's private life. This post will only look at mass psychological profiling from the data protection perspective.
Does EU data protection law apply to the political profilers targeting US voters?
But why would EU data protection law even be applicable to a company creating profiles of 220 million Americans? Surprisingly, EU data protection law could indeed be relevant in this case, if it turns out that the company carrying out the profiling is based in the UK (London-based), as several websites claim in their articles (here, here and here).
Under Article 4(1)(a) of Directive 95/46, the national provisions adopted pursuant to the directive shall apply “where the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State“. Therefore, the territorial application of Directive 95/46 is triggered by the place of establishment of the controller. Moreover, Recital 18 of the Directive’s Preamble explains that “in order to ensure that individuals are not deprived of the protection to which they are entitled under this Directive, any processing of personal data in the Community (EU – n.) must be carried out in accordance with the law of one of the Member States” and that “in this connection, processing carried out under the responsibility of a controller who is established in a Member State should be governed by the law of that State” (see also CJEU Case C-230/14 Weltimmo, paras. 24, 25, 26).
There are, therefore, no exceptions to applying EU data protection rules to any processing of personal data that is carried out under the responsibility of a controller established in a Member State. Is it relevant here whether the data subjects are not European citizens, and whether they would not even be physically located within Europe? The answer is probably in the negative. Directive 95/46 provides that the data subjects it protects are “identified or identifiable natural persons“, without differentiating them based on their nationality. Neither does the Directive link its application to any territorial factor concerning the data subjects. Moreover, according to Article 8 of the EU Charter of Fundamental Rights, “everyone has the right to the protection of personal data concerning him or her”.
I must emphasise here that the Court of Justice of the EU is the only authority that can interpret EU law in a binding manner, and that until the Court decides how to interpret EU law in a specific case, we can only engage in argumentative exercises. If the interpretation proposed above were found to have some merit, it would indeed be somewhat ironic to have the data of 220 million Americans protected by EU data protection rules.
What safeguards do persons have against psychological profiling for political purposes?
This kind of psychological profiling for political purposes would raise a number of serious questions. First of all, there is the question of whether this processing operation involves processing of “special categories of data”. According to Article 8(1) of Directive 95/46, “Member States shall prohibit the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of data concerning health or sex life.” There are several exceptions to this prohibition, of which only two would conceivably be applicable to this kind of profiling:
- the data subject has given his or her explicit consent to the processing of those data (Article 8(2)(a)); or
- the processing relates to data which are manifestly made public by the data subject (Article 8(2)(e)).

In other words, for this kind of psychological profiling to be lawful, the controller must either obtain explicit consent to process every data point used for every person profiled, or use only those data points that each person manifestly made public.
Moreover, under Article 15(1) of Directive 95/46, the person has the right “not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.”. It is of course open to interpretation to what extent psychological profiling for political purposes produces legal effects concerning a person or significantly affects him or her.
Another problem concerns the obligation of the controller to inform every person concerned that this kind of profiling is taking place (Articles 10 and 11 of Directive 95/46) and to give them details about the identity of the controller, the purposes of the processing and all the personal data that is being processed. In addition, the person should be informed that he or she has the right to ask for a copy of the data the controller holds about him or her and the right to ask for the erasure of that data if it was processed unlawfully (Article 12 of Directive 95/46).
Significantly, the person has the right to opt out of a processing operation, at any time and without giving reasons, if the data is being processed for direct marketing purposes (Article 14(b) of Directive 95/46). In the UK, for instance, the supervisory authority (the Information Commissioner's Office) issued Guidance for political campaigns in 2014, giving the example of “a telephone call which seeks an individual’s opinions in order to use that data to identify those people likely to support the political party or referendum campaign at a future date in order to target them with marketing” as constituting direct marketing.
Some thoughts
***