EU Member States scramble to stand up to fake news

@LawAhead

Alongside the European Union’s efforts to research, educate and raise awareness about fake news, its member states are beginning to legislate against it. Though the results to date have been far from outstanding, there are glimmers of hope on the horizon.

Author: Pedro Peña, Attorney of the Spanish Parliament and Associate Professor at IE Law School

When looking at how differently European states deal with the complex global phenomenon of online disinformation, the venerable words of John Stuart Mill come to mind. He maintained that “everything that makes life worth living for anyone depends on restraints being put on the actions of other people,” followed by the question: “What should these rules be? That is the principal question in human affairs.”[1] Although Member States almost unanimously agree that disseminating misleading information online for economic gain or to cause social damage is a real problem for democracy,[2] they are divided over who has to act and what must be done to counter its effects. If we look at what has happened in Germany, France, the UK and Spain over the last two years, the different approaches to this issue are clear to see.

Although Member States almost unanimously agree that disseminating misleading information online for economic gain or to cause social damage is a real problem for democracy, they are divided when discussing who has to act.

 

Legislating for social networks in Germany 

In Germany, a Network Enforcement Act (NetzDG) was passed by a large majority of the Bundestag and entered into force on October 1, 2017, signaling “the end of the law of the jungle on the internet,”[3] in the words of its main advocate, Justice Minister Heiko Maas.

The law is directed at service providers that operate internet platforms with over 2 million registered users whose purpose is to share content and make it publicly available. It affects major social networks (such as Facebook, Twitter or YouTube) but not businesses or platforms that produce journalistic or editorial content, or that disseminate specific information, such as LinkedIn. The objective of the law is for illegal content to be removed or blocked from social media as soon as a complaint is filed. Content is deemed illegal if it falls under one of the many offenses listed in the Penal Code relating to state security, public order, honor and privacy, sexual freedom, incitement to hatred, or the dissemination of symbols of unconstitutional groups. Enforcement of all this falls mainly to the social network itself.

By law, social networks must ensure that complaints are processed effectively and transparently, and dealt with on a case-by-case basis. If the content is deemed illegal, it must be blocked or removed within 24 hours of the complaint, provided that there is sufficient evidence. If the illegality is not immediately obvious, the social network has 7 days (or longer in certain circumstances) to assess and remove it. If, on the other hand, the social network dismisses the complaint, the content remains online. If the government disagrees with the social network’s decision, the case can be taken to court to have the reported content ruled illegal.

The law also imposes a second obligation on social networks. If they receive over 100 complaints regarding illegal content per calendar year, they must publish a biannual report on their complaint-management practices, both in the official government gazette and on their own websites. The report must describe their efforts to reduce users’ illicit activity, their systems for filing formal complaints, and their criteria for deciding whether to block reported content.[4]

Violations of the law carry regulatory fines ranging from a minimum of €500,000 to a maximum of €50 million.

Source: Kirsten Gollatz, Martin J. Riedl and Jens Pohlmann, Alexander von Humboldt Institute for Internet and Society (HIIG)

 

France blocks a law against manipulating information during election periods

As yet, France does not have legislation in place to fight online disinformation. President Emmanuel Macron and his parliamentary majority plan to fight “the existence of massive campaigns using online communication services to spread false information with the intent to modify the normal course of the electoral process.”[5] Those plans have stalled, however, even though the National Assembly approved a legislative proposal on July 3 (alongside a second, organic-law proposal covering the presidential election) that included the following measures:

First, the introduction of a simplified urgent appeal before a civil judge, available during the so-called “electoral period” (between three and four months before the day of the election or referendum). Within 48 hours of the appeal, the judge can order adequate measures to stop false information that may affect the ballot (or “any allegation or accusation that a fact is inexact or misleading”) from being deliberately and artificially disseminated online.

Secondly, the attribution of new powers to the Conseil Supérieur de l’Audiovisuel (CSA). Among these powers, during an electoral period the CSA will be able to interrupt, suspend or cancel the broadcast of television services controlled by foreign states that may undermine fundamental social values or aim to destabilize French institutions. The CSA will also have the power to reject requests for licensing agreements for audiovisual services, or even to rescind the agreements themselves, in the case of a threat to fundamental rights or values.

Lastly, the imposition of new duties on internet platforms, access providers and content distributors, which must publish information on the measures they employ against false information as well as all data regarding their advertisers.

However, these measures were rejected outright by the French Senate.[6] Since the joint committee was unable to reach a consensus in September, the proposal approved by the Assembly must be processed again. This time it will be fast-tracked, although there is no guarantee that it will pass.

 

An interim report on disinformation and fake news is published in the UK

The UK also has no legislation in place, nor does it show signs of adopting anti-disinformation measures in the immediate future. Rather, as is relatively common in the British parliamentary system, a committee inquiry led to the publication of a special report.[7] On July 24, the House of Commons Digital, Culture, Media and Sport Committee published its general remarks and conclusions: a list of key concerns about fake news, particularly that spread via social media, and of urgent steps to be taken by the Government and independent regulatory authorities. The report is not binding, and is by no means final; the Committee has announced that a definitive version will be published in fall 2018. However, its contributions to this topic should not be ignored: it is the result of a long process of gathering and analyzing evidence, and it expresses the concerns of a considerable cross-party majority of parliamentarians.

First of all, the report asks the Government to agree on operative, consensual definitions of the many features that mark information as fake, in order to achieve clarity, stability and legal certainty. The Government must also support research into how online disinformation is created and spread.

Secondly, the report calls for the law to be updated with principles and rules that are flexible enough to allow it to keep up with rapid technological changes. The report also calls for a complete reform of electoral law, which is currently deeply unsuited to the digital age.

In any case, the core message of the report is a criticism of the lenient, minimal regulation and the elusive, self-interested practices of the digital platforms that operate social networks. According to the report, since these technology companies use algorithms and human intervention to continuously shape what their users see, it is unclear how much or how little responsibility they bear for the content circulating on their platforms. The report reasons that, although these platforms are nothing like journalistic companies that answer for the editorial content created by their “publishers,” they must nonetheless bear a clear legal responsibility to act against illegal and harmful content circulating on their servers. Finally, the report recommends obliging social media providers to reassess their algorithms and security measures, to act against fake accounts that degrade their platforms and defraud advertisers, and to inform users of their right to privacy.

Some have seen this report as a wake-up call: a shock for society and its political representatives, compelling them to act without delay.[8] Others are more cynical, pessimistic and less hopeful, seeing it simply as a sign of public authorities’ impotence in the face of the technology giants: a battle that the UK is losing, unlike its neighbors on the other side of the Atlantic.[9]

 

Spain rejects a proposal to guarantee “truthful information”

Out of the four countries examined here, Spain’s action against disinformation has been the least comprehensive. The only significant initiative was the motion put forward by the People’s Party (Grupo Popular) in Congress in early 2018, when Mariano Rajoy was still head of the Government.

The text was vague, repetitive and technically weak. Its main and most controversial section suggested that Congress should urge the Government “to come up with ways to determine the veracity of the information available to citizens online. They must then be turned into secure strategies to use the information’s ‘hallmark’ to identify it or disqualify it as potential fake news aimed at the citizen.” In addition, as they deal with threats to “social security and wellbeing, these methods and modes of action should be developed in public institutions specialized in collaborating with internet service providers, internet infrastructure providers, internet users, organizations and the press.”

However, on March 13, Congress rejected the motion after a debate that featured some moderate interventions, demanding “deeper knowledge of what is being discussed” and “a major agreement, a national pact (…) to guarantee freedom and rights online, as well as security,” alongside other far less nuanced, more extreme interventions, which referred to “a minefield that completely goes against fundamental rights,” “conspiracy theories and scaremongering to create a state of affairs favorable to social control,” or “a first step towards controlling the web with (…) the everlasting primal inquisitorial desire to censor.”[10] Beyond that, there is nothing on the horizon other than a possible working group on disinformation campaigns within the Defense Committee.

Source: Flash Eurobarometer 464, “Fake News and Disinformation Online,” European Commission (fieldwork conducted in February 2018; published April 2018)

 

Conclusions

The differences between the four countries highlighted in this brief summary are significant. Their approaches, areas of interest, processes and content bear few similarities. This should not come as a surprise, considering that we are discussing Europe, a region of diversity and pluralism, and four of its major states. Each country’s constitutional and legal structure, culture, political situation, and level of public awareness on this problem have an impact. These factors explain and help us understand the current state of divergence.

That being said, we must focus on the positive outcomes of this analysis: three key points were agreed upon. Firstly, that online disinformation is a serious matter that needs to be addressed while maintaining respect for human rights. Secondly, that digital platforms need to cooperate more clearly and responsibly with public authorities. And finally, that citizens need to be educated about social and digital media. There is also a fourth, less clear-cut point on which the countries converge, one that is relevant today and will only become more so in the future: states need to intervene and adopt legislation to complement the European Union’s approach (self-regulation, training measures, digital and media literacy) in the fight against online disinformation. Only time will tell what happens, and which type of laws will materialize.

Pedro Peña is a lawyer with vast experience in telecommunications, audiovisual and internet law. He has been general counsel and secretary to the boards of Jazztel and Vodafone, and has also worked in the public sector as secretary general of RENFE. He is an Attorney of the Spanish Parliament, having held various positions in the Congress of Deputies intermittently since 1986. Mr. Peña is an Associate Professor at IE, holds a Law Degree from Universidad Autónoma de Madrid and a Master of Laws (LLM) from Columbia University School of Law. His writings on digital matters and the law can be found at sociedadgigabit.com.

Note: The views expressed by the author of this paper are completely personal and do not represent the position of any affiliated institution.

[1] John Stuart Mill, On Liberty (Spanish edition: “Sobre la libertad,” Aguilar, 1971)
[2] “A large majority of respondents think that the existence of fake news is a problem in their country (85%) and for democracy in general (83%).” Flash Eurobarometer, “Fake News and Disinformation Online,” April 2018
[3] Linda Kinstler, “Can Germany Fix Facebook?”, www.theatlantic.com, 2 November 2017
[4] Every six months the platforms are required to report their findings. Here are the headline numbers: “Twitter received 264,000 complaints and blocked or removed 11 percent of the reported content; YouTube received 215,000 complaints and removed or blocked 27 percent of the reported content, most of which was hate speech or political extremism; Facebook received 1,704 complaints and removed or blocked 21 percent; YouTube removed nearly 93 percent of the content that needed to be blocked within 24 hours of receiving the complaint; In 637 cases Twitter took longer than 24 hours to remove content; Most of Facebook’s content deletion took place within 24 hours and 24 took more than a week.”
Lucinda Southern, “Before, it was a black box: Platforms report how they delete illegal content in Germany”, www.digiday.com, August 2, 2018.
[5] Benjamin Hue, “Loi contre les fake news: ce qui contient le texte du gouvernement,” www.rtl.fr, 13 February 2018
[6] “The bill and the organic bill provide answers that are at best ineffective and at worst dangerous, and in any event offer no hope of resolving the uncertainties” (translated from the French), www.senat.fr, 26 July 2018
[7] House of Commons, Digital, Culture, Media and Sport Committee, “Disinformation and ‘fake news’: Interim Report,” 24 July 2018
[8] “The Guardian view on the fight against fake news: neutrality is not an option,” www.theguardian.com, 29 July 2018
[9] “The fake news inquiry is a tale of British political powerlessness,” www.wired.co.uk, 30 July 2018
[10] Diario de Sesiones de los Diputados, 13 March 2018