Our increased reliance on digital platforms during COVID-19 and beyond

The COVID-19 crisis has seen videoconferencing move beyond the merely useful to become an essential tool for businesses and individuals alike. What are the ramifications of that change for privacy, security and trust?

Authors: Jimena López-Navarro, LLB-BBA student, IE University, and Dr. Argyri Panezi, Assistant Professor of Law and Technology, IE University

“You can’t trust code that you did not totally create yourself,” said Ken Thompson, co-creator of UNIX, in his 1983 Turing Award lecture.[1] In other words, no amount of source-level verification can protect you from using untrusted code. Thompson’s observation implies that most large companies are vulnerable to hacking, which means that their users are vulnerable as well. Users, moreover, face a dual risk: the risk of third parties hacking the service they’re using, and the risk of breaches by the company itself. Unfortunately, users are usually unaware of how and why a company processes their data.
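To make Thompson’s point concrete, here is a deliberately toy sketch in Python. Thompson’s original example concerned a C compiler, and none of the names below come from real software: the sketch simply shows a compromised “compiler” that plants a backdoor in the login programs it builds, and re-plants its own trojan whenever it rebuilds a compiler, so that even a clean, fully audited source tree produces a tainted binary.

```python
# Toy illustration of Thompson's "trusting trust" attack. This is NOT
# his actual code; every name and pattern here is hypothetical.

def honest_compile(source: str) -> str:
    """Stand-in for a real compiler: here, the 'binary' is just the source."""
    return source

def trojan_compile(source: str) -> str:
    """A compromised compiler that tampers with its own output."""
    binary = honest_compile(source)
    if "def check_password" in source:
        # Step 1: any login program it builds secretly accepts a master key.
        binary += "\n# BACKDOOR: also accept the attacker's password"
    if "def trojan_compile" in source:
        # Step 2: any compiler it builds gets the same trojan re-inserted,
        # so the attack survives even if the compiler's source is cleaned.
        binary += "\n# TROJAN: re-inserted into the new compiler binary"
    return binary

# Auditing this login source reveals nothing suspicious, yet the
# "binary" that comes out of the compromised compiler is tainted:
login_source = "def check_password(user, pw):\n    return pw == stored[user]"
print(trojan_compile(login_source))
```

The unsettling consequence, and Thompson’s real point, is that reading source code is not enough: the trust problem recurses down to the compiler that compiled your compiler, and ultimately to the hardware itself.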

This has been illustrated lately in the case of Zoom Video Communications, as the videoconferencing platform came to the center of public attention amidst the coronavirus outbreak. A number of articles in the popular press brought the issue to the public eye (see, for example, Shira Ovide for the New York Times, “Zoom Is Easy. That’s Why It’s Dangerous,” and Lily Hay Newman for Wired, “The Zoom Privacy Backlash Is Only Getting Started”). The platform’s privacy and security vulnerabilities, however, had already been exposed before COVID-19.

Increased scrutiny of the most popular platforms

With the COVID-19 pandemic, the use of teleconferencing services has risen exponentially, and none more so than Zoom’s. The platform has grown dramatically since the global pandemic began and now ranks number one in the US, but it has also come under increased scrutiny for inadequate security and privacy measures.

The company has faced two class action lawsuits within the past month. The first was initiated by users over the sharing of data with Facebook. The second, initiated by a shareholder, claims that the company made false and/or misleading statements, failed to disclose inadequate privacy and security measures, and offered a service that was not end-to-end encrypted. The shareholder’s complaint has been made available online.

How are these vulnerabilities affecting users? It appears that attackers have been able to remove attendees from meetings, hijack webcams and shared screens, and send spoofed messages. Furthermore, because the service lacks end-to-end encryption, parties other than the meeting’s participants, including Zoom itself, can access the video and audio feeds of meetings.
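The distinction matters in practice. The sketch below, a minimal illustration in Python using the third-party cryptography package (the variable names and the relay model are our simplification, not Zoom’s actual architecture), contrasts transport encryption, where the server decrypts and re-encrypts each hop and can therefore read everything, with end-to-end encryption, where only the participants hold the key.

```python
# Minimal sketch: transport encryption vs. end-to-end encryption.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

message = b"confidential meeting audio"

# --- Transport encryption only: the server holds the key for each hop ---
hop1 = Fernet(Fernet.generate_key())  # client <-> server key (server knows it)
hop2 = Fernet(Fernet.generate_key())  # server <-> guest key  (server knows it)

in_transit = hop1.encrypt(message)
at_server = hop1.decrypt(in_transit)   # the server sees the plaintext here
relayed = hop2.encrypt(at_server)
print(hop2.decrypt(relayed))           # the guest receives the message

# --- End-to-end encryption: only the endpoints ever hold the key --------
# (Real systems negotiate this key with a handshake, e.g. Diffie-Hellman;
# here we simply create it directly to keep the sketch short.)
e2e_key = Fernet(Fernet.generate_key())
sealed = e2e_key.encrypt(message)
# The server can relay `sealed` but, holding no key, cannot decrypt it.
print(e2e_key.decrypt(sealed))         # only the other participant can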

“Zoom-bombing” refers to intruders joining video conferences to display pornographic or hateful images. Zoom’s troubles do not end there: as already mentioned, it is also being sued for allegedly selling users’ personal data to external companies, namely Facebook, without informing users.

Evidence apparently suggests that Zoom’s software notifies Facebook whenever a user opens the platform and, in exchange for payment, hands over the user’s personal data, including the device they are using Zoom on, its model and its unique advertising identifier. To make matters worse, these issues cannot be solved by simply uninstalling the app, as it can be reactivated without the user’s permission, yet another of Zoom’s flaws.

Amidst all this pressure, at the beginning of April Zoom’s founder, Eric Yuan, recognized the company’s responsibility, pledged robust vulnerability disclosure and laid out plans to fix the flaws. On April 15 the company announced the hiring of cybersecurity experts to reboot its bug bounty program.

Is this just a price we have to pay?

Was Ken Thompson right? If we cannot fully trust any code or system that we haven’t written or built ourselves, should we just accept that our increased reliance on digital systems means we must effectively forgo privacy? Should we simply accept that we cannot trust a digital infrastructure provided by private and/or public actors?

This is perhaps an extremely risk-averse view, the equivalent of never stepping outside for fear of being run over by a car. Unless one wishes to remain isolated in the 21st century, it is impossible not to use the many digital platforms, software and hardware tools available for daily tasks, from running an online search (googling a word) to writing an assignment in Microsoft Word.

We clearly face a trade-off between privacy and security on the one hand, and functionality and efficiency on the other, especially as we rely on a global digital infrastructure that is regulated at national or regional levels. EU citizens, for example, have enjoyed a high level of privacy protection since the introduction of the GDPR, which provides a number of concrete rights over access to and processing of personal data.

If we cannot fully trust any code or system that we haven’t written or built ourselves, should we just accept that our increased reliance on digital systems means we must effectively forgo privacy?

The push for improved security

On the bright side, data protection and cybersecurity seem to have become a high priority in both the regulatory space and the marketplace, and particularly in civil society, which is consistently pushing for better privacy and security frameworks. A number of notable organizations are advocating for the privacy and security of our online systems, especially in response to the current pandemic.

Among them is a joint statement published on April 2 and signed by more than 100 organizations, including Access Now, Human Rights Watch, Amnesty International and Privacy International, which urges governments to “show leadership in tackling the pandemic by respecting human rights when using digital technologies to track and monitor people.”

There is also the Digital Rights in the COVID-19 Fight page by Access Now and the EFF’s guide on digital rights and COVID-19, alongside similar initiatives by Human Rights Watch, whose overview of the Human Rights Dimensions of COVID-19 addresses the sharing of sensitive data. The importance of our civil society space cannot be overstated in current times. Meanwhile, civil society itself suffers from this heightened digital dependence. As Lucy Bernholz argues, “digital civil society’s dependencies come into stark view at times when our own dependency on digital systems is also heightened.”


Still work to be done

On the darker side of things, regulatory and legislative protection remains fragmented. There is usually a lag when it comes to policy responses to technological effects and risks. At the same time, the convenience that digital systems have brought into our lives has been alluring. Tim Wu has previously warned that we should be wary of the creeping “tyranny of convenience” and its consequences. When it comes to our digital systems, our readiness to give up privacy and personal data is a chief concern.

Until policy-makers and legislatures are able to create a comprehensive system of laws protecting citizens’ rights in the digital era, which will likely take time and further effort, it is up to us as users to determine which platforms and systems we use.

As users, we can opt for software and systems that are transparent, disclose vulnerabilities and promote collaboration to enhance security. We can also pressure companies to be transparent and to invest in security; digital platforms are accountable to us for privacy, data protection and consumer protection more generally. Moreover, we can decide to what extent we want to expose ourselves to existing privacy and security flaws, since we might not require the same level of privacy for all our daily activities. Individuals and communities tend to conduct their own cost-benefit analysis and decide whether it is in their interest to expose themselves to a potential privacy and/or cybersecurity threat for any given activity online, much as we do offline.

If you’re merely using Zoom to chat with a friend, you may not consider the current security threats important. However, if you are the British Prime Minister and decide to chair a Cabinet meeting on Zoom, you may want to think twice about your decision!

When it comes to our digital systems, regulatory and legislative protection remains fragmented. There is usually a lag when it comes to policy responses to technological effects or risks. 

Who do you trust?

From our own cost-benefit thinking to the many online and offline resources that help minimize privacy and security risks, we can all find tools to navigate the risks of digital platforms and other products. Palante Technology Cooperative, for instance, recently published a very helpful guide on self-defense against Zoombombing.

Which other sources and guides can you trust to be helpful? Again, it all goes back to the tricky question that Thompson underlined: who do you trust for advice on online privacy and security if you do not have the expertise yourself? Open source software, for example, can be peer-reviewed and, when regularly audited, provides additional guarantees of transparency, accountability and security. A comprehensive answer to the question of trust, however, is complex and cannot be given in a short blogpost. That said, we can conclude by listing a few factors to look out for when researching trusted sources to navigate our online and offline risks.

Firstly, we need to be wary of the motivations behind our sources (market incentives, government interests, advocacy groups’ agendas and so on). We should gather information from diverse sources to fact-check advice, but also compare the features of multiple platforms ourselves (there are many alternatives to Zoom, for example, and various circulating lists of suggestions). Secondly, it is helpful to seek expert opinions on questions that require scientific expertise. As mentioned above, systems can be more trustworthy, secure and privacy-enhancing so long as developers are transparent and prioritize the rights of their users; open source software in particular promotes transparency, openness and collaboration. In this sense, it is telling that Thompson, co-creator of the proprietary UNIX operating system, admits that you cannot trust code you did not totally create yourself. Lastly, it is important to remember that misinformation tends to spread with unprecedented speed online, especially at times of crisis when many self-proclaimed experts rush to offer “solutions.”

Ken Thompson’s statement on trusting code may have been an extreme one, but it’s certainly worth keeping in mind when thinking about modern cybersecurity. Being aware of the risks and setting your own “filters” will help protect you from yet another invisible threat in these most testing of times.

 

Jimena López-Navarro is a second-year LLB-BBA student at IE University. Born in Spain and raised in Brussels, Jimena is passionate about human rights law, particularly in the area of technology, and is looking to pursue a career in the field.

 

Dr. Argyri Panezi, Assistant Professor of Law and Technology at IE University, is an expert in law and technology and intellectual property. She specializes in Internet law and policy and intellectual property law, with an emphasis on digital copyright, as well as data protection, intellectual goods management, automation, machine learning and AI. Her current research focuses on digitization and AI.

 

Note: The views expressed by the authors of this article are completely personal and do not represent the position of any affiliated institution.


[1] Thompson, K. (1984). Reflections on Trusting Trust. Communications of the ACM, 27(8), 761–763.