It’s not perfect, but it could work: The Action Plan against disinformation in the 2019 European elections

More resources, better coordination among Member States, and collaboration between digital platforms and the media to combat disinformation and ensure free and fair European elections.

Author: Pedro Peña, Attorney of the Spanish Parliament and Associate Professor at IE Law School

Investigative journalist Lyudmila Savchuk, who three years ago worked as an undercover agent at the notorious rumor factory “Internet Research Agency,” based in St. Petersburg, doesn’t harbor any doubts about Russia’s interest in developing disinformation campaigns as the European Parliament elections draw near. Vice-President of the European Commission, Andrus Ansip, is not cryptic about his feelings either when denouncing these types of campaigns built to influence elections and public consultations that take place in different Member States, adding that the evidence collected always points in the same direction: Russia[1]. None of this is new, nor is it limited exclusively to Europe, as evidenced by special counsel Robert Mueller’s sober and rigorously documented indictment against a number of Russian entities, with charges of illegal interference in the 2016 US presidential elections.

But although there may be a lack of novelty, there is no shortage of relevance when it comes to this matter. The 2019 European elections will be held in a very special context: a tense and volatile international situation; geopolitical competition between blocs that do not share the same basic principles; the unresolved institutional crisis that is Brexit; the complex kickoff of the Eurozone budget; pending reforms on migration, taxation and defense; and the strength of political forces openly hostile to the European project all set the scene. Two objectives should preside over the elections: i) achieving broad participation, because the European project needs the support of its citizens and the legitimacy that only their participation in the elections can provide; and ii) ensuring the elections are free, fair and transparent, so that citizens have access to reliable information about the political alternatives available to them and can evaluate them through honest and constructive public debate, without deception or misrepresentation.

The EU must act with determination and strength, not only to ensure its institutions continue to operate correctly but also to preserve the principles that are its raison d’être: democracy, freedom and the rule of law.

Cyberattacks, illegitimate use of data and disinformation threaten European elections

From this perspective, the elections face three main threats: (a) cyber-attacks carried out by hackers to damage or destroy networks or computer systems, affecting both electoral processes and party and candidate campaigns[2]; (b) the unlawful use of personal information for electoral propaganda[3]; and (c) the dissemination of fake news through online platforms, media and social networks. Facing these threats, and at this crucial time in its future, the European Union must act with determination and strength, not only to ensure its institutions continue to operate correctly but also to preserve the principles that are its raison d’être: democracy, freedom and the rule of law.

It is true that the EU has already taken important steps in recent years to provide secure and resilient electoral processes, allowing for clean elections free from suspicion. Some of these steps include directives on cyber-attacks (from 2013 and 2016)[4]; the General Data Protection Regulation (2016)[5], which prevents and punishes unlawful use of personal information; and the Communication “Tackling online disinformation: a European approach” (2018)[6], in which the Commission proposes a series of actions to, among other things, achieve a more transparent and accountable digital ecosystem, promote media literacy and education, and support quality journalism as an essential element of democratic societies. And of course, the 2015 East Strategic Communication Task Force, created to “address Russia’s ongoing disinformation campaigns,” can’t be left off the list.

An Action Plan with more political will than legal and conceptual realization

In view of the above, the Action Plan against Disinformation, released by the High Representative of the Union for Foreign Affairs and Security Policy and by the Commission on December 5th, 2018[7], is one more step in a consistent series of decisions, albeit a specific and somewhat urgent one, since it focuses on the upcoming European elections. The Plan essentially responds to the European Council’s call to adopt measures capable of “protecting the democratic systems of the Union and to combat disinformation”[8]. It builds upon Union initiatives such as those mentioned above, and has the support and cooperation of key partners like the North Atlantic Treaty Organization (NATO) and the Group of Seven (G7), which its promoters happily acknowledge. Three main elements of the Plan should be briefly summarized:

  1. The first is political: the frank acknowledgment that democratic processes face the growing challenge of deliberate and systematic large-scale disinformation. It is no wonder then that 83% of Europeans think that fake news poses a threat to democracy[9], or that 73% of Internet users are concerned about online disinformation in electoral campaigns [10].
  2. The second is conceptual: the definition of “disinformation” as verifiably false or misleading information created, presented and disseminated for economic gain or to intentionally mislead the public, in a way which could cause social harm. In this regard, words and definitions matter greatly: reaching a consensus on terms is essential for the community of professionals who analyze the phenomenon to understand one another, and the conduct to be avoided must be clearly delimited before its consequences can be defined in regulatory terms.
  3. The third is legal: the actions included in the Plan apply only to “disinformation” which, at least in many circumstances, is legal under EU or Member State law (whereas illegal content, such as hate speech, incitement to terrorism or child pornography, is dealt with by other EU or national rules)[11]. This means that the action against disinformation the Plan lays out, targeting behavior that is both voluntary and harmful to the formation of the people’s will, must be measured and subtle, respecting the right to freedom of expression, including the “freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers” (Article 10 of the European Convention on Human Rights), and “freedom and pluralism of the media” (Article 11 of the EU Charter of Fundamental Rights).

Together with these three main elements (recognition of political relevance, delimitation of terminology and balancing of legal interests that may be in conflict), the Plan is based on four pillars:

  1. Improving the capacity of the EU to detect, analyze and debunk disinformation through, inter alia, the strengthening of the Strategic Communication Task Forces (following the precursor focused on the East, and in particular Russia) of the European External Action Service and of the EU Delegations in neighboring countries.
  2. Strengthening coordinated and joint responses to disinformation, through the new Rapid Alert System (which will operate in real time and will be up and running by March 2019) and working closely with existing national networks.
  3. Mobilizing the private sector, highlighting the continuous monitoring the Commission intends to carry out of the Code of Practice on Disinformation (September 2018)[12] signed by companies like Facebook, Google and Twitter, which includes commitments on ad placement, political advertising, the integrity of services and the empowerment of consumers and researchers, and
  4. Raising awareness and improving societal resilience to disinformation, with campaigns about its negative effects and support for independent, quality journalism and research, among other measures.

Despite its obvious limitations, the Plan could be successful if Member States, platforms and the media deliver on their commitments

The Plan is not perfect. To begin with, its resources are modest and it lacks a clear and proven understanding of the effects of the phenomenon it aims to combat. The 2019 budget for strategic communication on disinformation is 5 million euros (a substantial increase over the previous year, but still far from the resources devoted to the same purpose in other regions), and the Plan calls on Member States to strengthen their own national plans to complement measures taken at Union level. In addition, there are few studies on the impact of disinformation on elections: it is not known exactly how, and to what extent, a political disinformation campaign can sway the popular vote, and the Plan sheds no light on this question. Furthermore, and in line with the conceptual precision the Plan itself proposes, disinformation would need to be better distinguished from similar and equally widespread phenomena such as propaganda, partisan communication, or misinformation spread unintentionally. Finally, as often happens with Union initiatives, the Plan lacks precision on some points (for example, the suggestion to adapt current electoral rules on election silence, the publication of polls, etc. to the digital sphere) and depends, probably too much, on the results of self-regulation by relevant players such as online platforms and the media[13].

And yet, it could work. First, it is no small thing to openly acknowledge the existence of the problem (disinformation in electoral campaigns), give it due importance, warn against the corrosive effects it could have on democracy and react decisively against what constitutes undeniable and unacceptable interference by a state power determined to undermine the European project. Secondly, the Plan includes a series of internally consistent actions that continue the EU’s political decision to fight disinformation with a pluralistic approach: it relies on the involvement of a range of relevant actors (not exclusively on the action of public authorities) and favors cooperation, education and self-regulation over regulation, which the complexities and risks involved would otherwise prevent or delay. Thirdly, its emphasis on cooperation and collaboration among Member States deserves a positive assessment: the organization of the 2019 European Parliament elections is a national matter, while disinformation is at once a local and a global phenomenon. And last, but not least, the Plan builds on previous commitments (of varying value, but all important) such as those reached with online platforms like Facebook, Google and Twitter, which include measures to ensure bots cannot be passed off as humans, a commitment to publish annual compliance reports, collaboration with researchers and scholars of the phenomenon, and work with different organizations and media companies to strengthen fact-checking.

This is the situation, and at a time when the European Union is not exactly in its most glorious state, the Plan deserves luck and strong support from Member States, companies and citizens of the European Union in order to be successful. There is a lot at stake, but at least in this case, it cannot be said that community institutions have not done their share of the work.

Pedro Peña is a lawyer with vast experience in telecommunications, audiovisual and internet law. He has been general counsel and secretary to the Board of Jazztel and Vodafone. He has also worked in the public sector, as secretary general of RENFE. He is an Attorney of the Spanish Parliament and has held various positions in the Congress of Deputies, intermittently, from 1986 to the present. Mr. Peña is Associate Professor at IE, holds a Law Degree from Universidad Autónoma de Madrid and a Master of Laws (LLM) from Columbia University School of Law.

Note: The views expressed by the author of this paper are completely personal and do not represent the position of any affiliated institution.

[1] Andrus Ansip, Vice-President responsible for the Digital Single Market, said: “We have seen attempts to interfere in elections and referenda, with evidence pointing to Russia as a primary source of these campaigns.”
[2] “When asked generally about elections in Europe, respondents most often answered they were concerned about elections being manipulated through cyber attacks (61%), foreign actors and criminal groups influencing elections covertly (59%), the final result of an election being manipulated (56%) or votes being bought or sold (55%).” Special Eurobarometer 477: Democracy & Elections, September 2018
[3] “More than two thirds (67%) of Internet users were concerned the personal data people leave on the Internet is used to target the political messages they see, undermining free and fair competition between all political parties.” Special Eurobarometer 477: Democracy & Elections, September 2018
[4] Directive 2013/40/EU of the European Parliament and of the Council of 12 August 2013 on attacks against information systems and Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union
[5] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC 
[6] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Tackling online disinformation: a European Approach  COM/2018/236
[7] Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Action Plan against Disinformation.
[8] “The European Council also calls for measures to (…) protect the Union’s democratic systems and combat disinformation, including in the context of the upcoming European elections, in full respect of fundamental rights.”
[9] “More than eight in ten respondents (85%) think that the existence of fake news is a problem in their country, at least to some extent. A similar proportion (83%) say that it is a problem for democracy in general.”
[10] “When asked about their concerns on the use of Internet in the pre-election period during local, national or European elections: Almost three quarters (73%) of Internet users were concerned about disinformation or misinformation online (…).”
[11] Directive 2011/93/EU of the European Parliament and of the Council of 13 December 2011 on combating the sexual abuse and sexual exploitation of children and child pornography, and replacing Council Framework Decision 2004/68/JHA, and Council Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law, among others.
[12] Code of Practice on Disinformation
[13] “(…) but there is a much more mixed picture when it comes to government intervention. While almost two-thirds (61%) agree that governments should do more, it is striking that sentiment is much more in favour of action in Europe (60%) than in the United States (41%), where the issue of ‘fake news’ seems to have had the most impact.”