The creation of a priority channel for reporting the sharing of sensitive content and requesting its withdrawal is a courageous, responsible and timely initiative.
Author: Pedro Peña, Attorney of the Spanish Parliament and Associate Professor at IE Law School
The frequent publishing and sharing of sexually explicit or sensitive images on the Internet often leads to tragic ends, with the most vulnerable members of our society hardest hit. To combat this increasingly prevalent phenomenon, the Spanish Data Protection Agency (AEPD) has created a new priority channel for reporting the illegal sharing of sensitive images and requesting the rapid removal of such content from the Internet.
There are several reasons why the agency has decided to proceed in this way. Perhaps the most striking and tragic case to propel the AEPD in this direction was the suicide of an IVECO worker in Madrid, after a sexually explicit video in which she appeared was shared among her colleagues. The precise trigger is, in fact, beside the point. What matters is that this independent Spanish privacy authority has adopted a new initiative worthy of recognition. In the following article, I analyze the aims and processes of this new priority channel.
How does the new procedure differ from the existing process?
The new channel is designed for exceptionally delicate situations, that is to say, images that display content of a sexual nature or violent acts, and put the rights and freedoms of those involved—particularly victims of domestic violence or minors—at great risk. It differs from the ordinary procedure for the exercise of one’s right to erasure,[1] which is available to anyone affected by the dissemination of images without their consent.
Under this procedure, the person concerned must first approach the provider of the service (such as Facebook, Google or YouTube) where the images appear. If one month has passed since the complaint was filed and the images have not been removed,[2] the affected parties may file a complaint on the AEPD’s website, attaching documentation proving that they have already requested the removal of the images from the online service provider.
The course of action via the newly created priority channel differs from the usual procedure in at least four ways:
- It is not designed for all situations, but only for those cases that are extremely sensitive;
- It is not necessary to contact the Internet service provider in advance to request the removal of the images in question, as one can go directly to the AEPD;
- There is no time limit for resolving the withdrawal request, but it is generally dealt with by the AEPD as a matter of urgency; and
- Not only may the person or persons who appear in the illegally published images report their dissemination and request their withdrawal; anyone may do so.
Ultimately, the objective of the priority channel is to establish an avenue for complaints pertaining to exceptional and especially serious situations. In particular, this relates to those that concern the unlawful dissemination of ‘sensitive’ content, which should be addressed as a matter of priority.[3] This enables the agency, as an independent authority, to adopt urgent measures, such as ordering the precautionary removal of images, with which Internet service providers must comply immediately, without prejudice to the final decision or any subsequent sanctioning procedure adopted by the agency for the purpose of determining the appropriate liabilities.[4]
How should claimants file a complaint?
With regard to how to file a complaint through the priority channel, the AEPD stipulates that claimants must describe their personal situation and the circumstances in which the non-consensual dissemination of sensitive content has taken place, as well as stating whether they are victims of domestic violence, abuse, sexual assault, or harassment, or whether they belong to a particularly vulnerable group. They must also specify the web address(es) where the images have been published and are being disseminated, or clearly identify the social profile through which they are being shared. In addition, they will be asked to provide additional information, such as whether or not a report has been made to the police.
Why was the new procedure implemented?
In my opinion, the decision of the AEPD to create this priority channel appears to have been taken for the following reasons:
- The inadequacy of the usual procedure to deal with serious and, unfortunately, frequent situations.[5] This inadequacy derives from various factors: the person concerned may not make a report due to a lack of awareness of their rights, for fear of reprisals or unwanted disclosure, or because they are frozen by panic or fear. In addition, the processing of the complaint may not be fast enough, and images of sexual or violent content can remain on the Internet after they have been reported, and may even be shared again, which results in a violation of the fundamental rights to integrity, privacy, honor and self-image.
- The condemnation that the illicit dissemination of images of violence and aggression deserves, together with the protection that must be given to those affected: women, children and other particularly vulnerable groups, such as disabled persons, people with serious illnesses, and people at risk of social exclusion, provided no relevant third-party rights or interests exist that would have to be taken into account when removing these images from the Internet.
- The supervisory role and the power to monitor the application of the GDPR and the Organic Law on Personal Data Protection and the Guarantee of Digital Rights entrusted to the AEPD, by virtue of which it may ‘impose a temporary or definitive limitation on the processing [of personal data], including its prohibition’.
- The willingness to exercise this function in an active, measured and fair manner, pushing back against today’s climate of countless unfortunate cases, and to assume this responsibility in a public capacity, rather than leaving the initiative and the solution to the problem, at least in the first instance, in the hands of digital platforms and private companies.
The creation of the priority channel, communicated on 24 September, is part of a general framework consisting of an agreement signed with the Ministry for the Presidency, Parliamentary Relations and Equality, and five action protocols (signed with the Ministries of the Interior, Labor and Education, the Prosecutor General and the General Council of the Legal Profession). This framework includes prevention and awareness-raising measures, training and, most importantly, information on the administrative, civil, and criminal consequences of the dissemination of sensitive content, both for companies and for citizens.[6]
Room for improvement
The decision to create this channel clearly merits support and should be positively welcomed. However, some critical remarks concerning the priority channel can be made, which are as follows:
- It would be advisable to be more precise about the objective of the channel and the kinds of behavior in these ‘sensitive’ images that can be denounced. It seems clear that these include actions or expressions of a sexual nature or intention, as well as acts of violence, abuse, aggression and harassment, although perhaps other similar conduct could also be added, such as taunts or slurs.
- It would also be advisable to be more precise about who can benefit from this procedure, that is, the groups who seek protection from the priority channel. It is clear that women (particularly victims of gender violence), minors, persons with disabilities or serious illnesses, and people at risk of social exclusion are included, and in the case of images with sexual content, the priority channel applies to both men and women. But it fails to include, for example, a reference to the elderly. It would be relatively straightforward to add this section of the public to the aforementioned groups and give them the same level of protection offered by the priority channel.
- Where the person reporting the unlawful sharing of sensitive content and requesting its removal is not the person concerned, it must be presumed that the complainant has sufficient knowledge of the circumstances of the case in question. This should be emphasized. Otherwise, complaints may be filed that, although likely well intentioned, concern cases in which there is in fact consent, or in which the content is intended as a joke or a performance.
- An unclarified practical problem, at least in the information provided by the AEPD, is the actual availability of the channel. Does it operate twenty-four hours a day, seven days a week? The AEPD does note, however, that there are specialized teams of professionals at both ends of the line—both within the AEPD and at the Internet companies where these images usually appear, such as Google, Facebook and YouTube.
- A second, more substantial practical problem concerns WhatsApp, which offers an end-to-end encrypted message and call service whereby no one can read, see or hear private messages and conversations—not even WhatsApp itself. This obviously represents a clear limit to the potential intervention of the AEPD on content distributed through this private social app.
In short—pending information on its first results and while awaiting the creation of specific and coordinated tools on this issue at the EU level in the medium term[7]—it can be said that the creation of a priority channel for reporting the sharing of sensitive content and requesting its withdrawal is a courageous, responsible and timely initiative. It is also worth highlighting the success of its design, which has a defined focus—centering on very serious behavior and groups that deserve special protection—and adopts a proportionate approach, as demonstrated by the precautionary removal measure, the presence of rights worthy of immediate protection, and the absence of other rights in possible conflict (such as freedom of expression or information). Moreover, by leaving the decision to remove photos or videos in the hands of the AEPD rather than those of digital platforms or Internet service providers,[8] problems relating to automated deletion and conservative decision-making are avoided.
Finally—and without placing all the blame on the AEPD’s shoulders—it would have probably been desirable for the Spanish Parliament to introduce some kind of intervention, participation or commitment in regard to this issue, and this would have reinforced the general framework in which the initiative to create the channel was presented. But it is already common knowledge that, due to widely publicized factors, the current year has not been particularly productive for the public body that represents the Spanish people.
Pedro Peña is a lawyer with vast experience in telecommunications, audiovisual and Internet law. He has been general counsel and secretary to the boards of Jazztel and Vodafone. He has also worked in the public sector, as secretary general of RENFE. He is an Attorney of the Spanish Parliament, having held different positions in the Congress of Deputies, discontinuously, from 1986 to the present. Mr. Peña is an Associate Professor at IE, holds a Law Degree from Universidad Autónoma de Madrid and a Master of Laws (LLM) from Columbia University School of Law. His writings on digital matters and the law can be found at sociedadgigabit.com.
Note: The views expressed by the author of this paper are completely personal and do not represent the position of any affiliated institution.
[1] Article 17 of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), and Article 15 of Organic Law 3/2018 of 5 December on the protection of personal data and the guarantee of digital rights.
[2] https://www.facebook.com/help/428478523862899; https://support.google.com/legal/troubleshooter/1114905?hl=es
[3] The term ‘sensitive’ is neither a legal concept nor is it equivalent to ‘personal data which, by their nature, are particularly sensitive’, which is dealt with in Article 9 of the General Data Protection Regulation (GDPR) and refers to: ethnic or racial matters, political opinions, religious or philosophical beliefs, genetic data, and so on.
[4] Articles 127 et seq. of the Royal Decree 1720/2007, which approves the Regulation implementing the Organic Law 15/1999, of 13 December, on the protection of personal data.
[5] http://www.violenciagenero.igualdad.mpr.gob.es/violenciaEnCifras/estudios/colecciones/pdf/Libro_18_Ciberacoso.pdf;
http://www.europarl.europa.eu/RegData/etudes/STUD/2018/604979/IPOL_STU(2018)604979_EN.pdf
[6] https://www.aepd.es/media/estudios/consecuencias-administrativas-disciplinarias-civiles-penales.pdf
[7] Today, there are various European proposals on how to mitigate the threats posed by the misuse of digital technology to the rights and freedoms of citizens and to their coexistence within democratic societies. One example is the UK Government’s ‘Online Harms White Paper’, which advocates for the imposition of a specific duty of care on online platforms and of enforcement powers for an independent regulatory body.
[8] This is the route taken in Germany with the 2017 NetzDG, which is aimed at Internet service providers designed to share content and make it available to the public that have 2 million or more users (such as Facebook, Twitter, or YouTube). These providers are responsible for removing or blocking illegal content on social networks, understood as content involving the commission of an offence under the Penal Code concerning, for example, state security, public order, honor and privacy, sexual freedom, or hate speech.