
Twitter and QAnon: when should free speech be free?

Twitter has blocked thousands of QAnon accounts. But isn't that a ban on free speech? Matthias Kettemann from the Leibniz Institute for Media Research discusses the complex domain of rights and freedoms online.

Twitter blocks QAnon platforms (Image: AFP/O. Douliery)

Deutsche Welle: Mr. Kettemann, should social media platforms be able to decide what users are allowed to say?

Matthias Kettemann: Social media platforms may seem like public spaces – after all, we meet our friends there and exchange opinions. But they are privately owned, so it is primarily the companies' own rules that regulate what users can say online.

Matthias Kettemann sees opportunities in the fact that trust in public service media is growing during the COVID-19 pandemic (Image: HBI 2020)

Users enter into a contract with the platform and the platform decides what’s OK and what’s not. For instance, a "dog network" could decide not to allow pictures of cats, and vice versa, but that’s not the whole story here.

First, companies are obliged to delete certain content all across the world: content that violates international law, such as the promotion of genocide, calls for terrorism, or qualified hate speech.

Second, as companies become increasingly "public" – that is, as they develop into communication spaces essential for political debate within a given social context – a platform may become a semi-public or even public sphere and will no longer be able to set its own rules, because it has become too important as a space for discourse.


Populists use social platforms to spread fake news and mount hate campaigns, which can endanger peace and even lives. Doesn't the state have a duty to protect citizens in this case?

Absolutely. All states have an obligation to refrain from violating the right to freedom of expression in the digital environment, but they also have a positive obligation to protect human rights and to create a safe and enabling environment for everyone to participate in public debate and to express opinions and ideas without fear, including those that offend, shock or disturb.

National laws don't cease to apply once you're online. Whenever states give private actors such as these companies the power to police parts of the Internet – by encouraging or forcing them to delete certain content – they have to ensure that effective oversight mechanisms are in place. However, there are also countries where Internet freedom is in terrible shape: where it is the state itself that purveys fake news, is responsible for hate speech against a minority, and uses online tools to stalk and attack bloggers and journalists.

Among conspiracy theorists in Germany, there is a growing number of QAnon followers (Image: picture-alliance/ZUMAPRESS/S. Babbar)

What options do we have for responding to hate speech or disinformation?

That's a great question. We have to differentiate between hate speech and disinformation. Qualified hate speech, such as specific calls for violence, has to be deleted and the speakers prosecuted. However, in democratic societies, speech that shocks, offends and disturbs the public is also protected in public spaces.

In private spaces, like on social platforms, the rules can be stricter. We have a right to get angry about hateful content. Counterspeech might be an option.

As for disinformation, platforms have to react differently depending on the kind of inauthentic content that is spread and the intention of the author. Platforms are not arbiters of truth, but they have recognized – not least during the coronavirus crisis and the current global debate on racism – that they have a growing responsibility for the online communication spheres they have created.

Platforms have a number of useful instruments at their disposal: they can fact-check posts, add notices when a picture or video has been manipulated, provide additional context or links to reputable sites, de-monetize problematic content, block the sharing of content judged to be incorrect, and suspend users.


The QAnon movement has gained traction around the world, though most adherents are still based in the US (Image: picture-alliance/AP Photo/M. Rourke)

How problematic in your view is the fact that private firms like Facebook and Twitter have long ceased being simply hosting services for user-generated content?

In regulating spaces for public discourse, states have to keep in mind that different platforms have different functions. Some companies act as access providers by allowing users to access social media sites without a data plan. Others host user-generated content, aggregate information and enable searches, and use algorithms to recommend certain content. Depending on the function, the responsibilities of platforms vary. The higher the risks, the tighter the obligations of companies.

All companies have to meet certain minimum standards. One great place to start is the Santa Clara Principles on Transparency and Accountability in Content Moderation, a collection of principles that platforms can commit to. In a growing number of states, platforms are also obliged to publish transparency reports.

Platforms should be encouraged to be as transparent and open as possible about the algorithms and rules they use to moderate content. Independent review bodies for moderation decisions, such as Facebook's Oversight Board, are an important step in the right direction. Another is solidifying internal compliance structures and carrying out human rights impact assessments.

The coronavirus crisis has demonstrated the popularity of quality media. How do you see the future of journalists as “gatekeepers” of information?

Gatekeeping is a delicate task, as is defining which media are "quality media." Is a well-researched video by a YouTuber more or less "quality media" than an article in an established newspaper stirring up resentments against minorities?

Quality assurance and compliance structures are important, but they must be designed in a way that neither reinforces pre-existing power structures nor undercuts the democratizing potential of the Internet and Internet-based journalism.

The so-called Pizzagate scandal – shots fired at the Comet Ping Pong pizzeria in Washington, DC, in 2016 – was based on a fake news report shared by the movement that would later become QAnon (Image: picture-alliance/dpa/J. Lo Scalzo)

Quality journalism is incredibly important. Study after study confirms that readers trust reputable news organizations. We now have to figure out how to support excellent journalism in the platform economy. Public service media continue to play a key role – in many, though not all, countries – in constructing a common discourse space and providing the facts on which public debates and, ultimately, social cohesion are based.

Readers and viewers place a high level of trust in public service media. The challenge is how to get quality content to platform users. Giving platforms a gatekeeping role, checking content before publication, does not scale. I think the valuable content produced by public service media has to be more strongly present on platforms and on the Internet. Legal hurdles have to be reduced, all while keeping the rights of private media companies in mind.

One big search engine provider has started, in a number of countries, to pay quality media for content included in its services. Other online companies cooperate with reputable news organizations in fact checking.

About Matthias Kettemann: Heading a team that explores the rules that shape how we communicate online, PD Dr. Matthias C. Kettemann, LL.M. (Harvard), is a senior researcher at the Leibniz Institute for Media Research in Hamburg. He is a lecturer in International Law, Internet Law and Legal Theory at the University of Frankfurt, visiting professor for International Law at the University of Jena, project lead for International Internet Law at the Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin, and project group leader for Platform and Content Governance at the Sustainable Computing Lab of the Vienna University of Economics and Business.

His latest book, on the normative order of the Internet, was recently published by Oxford University Press.