New EU content rules on disinformation online are too vague, expert warns
Platforms and search engines such as Facebook, X (formerly Twitter), and Google have been required under the Digital Services Act (DSA) to combat disinformation since Friday. However, the exact implications of these rules for freedom of expression remain unclear due to a lack of clear guidelines on identifying disinformation, according to Michael Klos, a legal expert and public administration scholar at Leiden University.
The Cabinet claimed that the new EU rules aim to "better protect citizens against disinformation, fake news, and violations of fundamental and human rights." Klos, who studies freedom of expression online, expressed skepticism about this claim. "Governments are making grand claims about tackling disinformation. Yet, there is no clear definition of what they classify as disinformation," the expert pointed out. "What expectations are governments trying to set for users? I'm not sure what they mean by better protection," he added.
Klos said that the DSA does not provide specific guidelines on dealing with disinformation. "There are no clear directives, like 'if you encounter this on your platform, you must act in this particular way,' for example." Under the new rules, platforms are required to conduct a risk assessment and take measures accordingly. "This could mean placing accurate information next to inaccurate content, labeling unreliable information, or even excluding certain things from search results," Klos explained. However, he said he believes it does not necessarily mean certain information will be deleted or users will be banned in case of disinformation.
Klos mentioned that the initial ambitions of the EU rules were much broader, with discussions about countering harmful content. However, this faced criticism because it remains difficult to define what constitutes 'harmful content.' "Many elements have been watered down due to the implications for freedom of speech," he said. He described the DSA as "a watered-down version" of those original aspirations. Still, Klos said he expects that the new rules will lead to a surge in legal cases.
The Netherlands Institute for Human Rights suggested that the Cabinet should assist tech companies in tackling disinformation. Such assistance could prevent these services from overstepping and, in doing so, restricting users' freedom of expression. The human rights institute emphasized that freedom of expression includes "information and ideas that might be viewed as offensive, shocking, or disturbing." However, it added that "when platforms use automated systems to review content, there's an inherent risk that context-sensitive expressions may be removed too hastily or not swiftly enough."
Tech giant Google expressed concerns that the new European rules might have "unintended consequences" for tech companies. The company warned that if it is compelled to share too much about its policies, malicious actors could exploit its services to spread harmful disinformation. Specifically, revealing the workings of the search engine's algorithm might allow these actors to manipulate it, pushing malicious sites to the top of search results.
While Google supports the goals of the new regulations and has updated its policies in line with the Digital Services Act, it emphasized the need for caution. Among its recent changes, the company has enhanced clarity about ads shown to European users and increased transparency around administrative decisions, such as content removal.
Meta, which owns platforms like Facebook and Instagram, expressed strong support for the new rules. The company believes the DSA "minimizes harm effectively, protects and empowers people, and upholds their fundamental rights," according to Nick Clegg, the company's president of global affairs.
X, previously known as Twitter, was unavailable for comment. Following its acquisition by Elon Musk, the company cut back on its press staff; inquiries are now automatically answered with a poop emoji.
Reporting by ANP