Misinformation and Disinformation are Big Global Risks
The EU has Published Guidelines Ahead of the 2024 Elections
Misinformation and disinformation, particularly in the new age of artificial intelligence, are among the biggest global risks right now, especially for democracies. The spread of inaccurate, false, or misleading information online makes it much more difficult for voters to make informed decisions.
In the run-up to elections, such practices have the potential to manipulate and polarise public opinion, swing an election in a particular direction, or create uncertainty about official election results, with significant consequences for public policy, populations, natural resources, and economic growth. A lot is at stake, and policymakers face the challenge of finding the right balance between mitigating risks and safeguarding the right to freedom of expression.
2024 is a major year for elections around the world, and particularly so in Europe. The emergence of new challenges, such as the rise of artificial intelligence, has added a new layer of complexity. AI-based tools are now easily available and already so advanced at generating synthetic media that they can be used to spread false information in harmful ways, whether through deepfakes that impersonate candidates or fake news reports designed to evoke an emotional response rather than to inform.
With AI now part of our daily lives, it is not possible to unscramble the egg. However, there is a balance to be struck so that new technologies such as AI can also be used as a force for good, a tool to help detect and debunk inaccurate, false, or misleading information, particularly on major online platforms and search engines. That requires robust regulation.
What is the EU doing?
In 2022, the European Union passed the Digital Services Act, which is now directly applicable across the EU. It is a significant piece of legislation covering all online intermediaries and platforms offering services and goods across the EU Single Market. It has introduced a common set of new rules giving the EU the ability to tackle illegal and harmful activities online, including the spread of disinformation. Very large platforms and search engines, which pose particular risks in the dissemination of illegal content and societal harms, are now regulated directly by the European Commission. Such big tech companies offering services within the EU are now expected to be more responsible and transparent, particularly regarding their algorithms. Non-compliance can result in tough new penalties imposed by the EU, including fines of up to 6% of global turnover.
This week, the EU published guidance for those designated by the EU as Very Large Online Platforms or Search Engines, such as Google, Facebook, and TikTok, ahead of the high number of elections taking place in the EU in 2024, including the EU Parliament elections in June. The goal is to ensure that very large online players “comply with their obligation to mitigate specific risks linked to electoral processes”.
Recommendations include the requirement for robust internal processes, the clear labelling of political advertising, the clear labelling of content generated by AI (such as deepfakes), and the promotion of official information on electoral processes.
Cooperation with the EU and national authorities, independent experts, and civil society organisations is expected before, during, and after elections. This includes the areas of foreign information manipulation and interference, and cybersecurity. Moreover, the publication of post-election review documents and providing an opportunity for public feedback are also recommended.
It is expected that the guidelines will be formally adopted and applicable by the end of April. The EU has also planned stress tests with the designated tech companies for the end of April.
Final thoughts
The publication of the latest guidance under the EU Digital Services Act ahead of EU elections this year shows that the EU Commission means business in ensuring that big tech does what is necessary to protect the integrity of the election process within the EU. That matters because one of the biggest global risks right now, particularly for democracies, is the damage that misinformation and disinformation can cause at election time.
Without hands-on regulation and penalties with real teeth, there is a risk that big tech would put short-term profit ahead of corporate responsibility, leaving less than adequate measures in place to mitigate the risk of misinformation and disinformation online. Given these companies' power in information dissemination, that could prove disastrous for voters if they are unable to distinguish between fact and fiction.
There are good regulations and then there are bad regulations that do more harm than good. There is always the risk of overregulation, which upsets the right regulatory balance, the sweet spot between reducing risks and nurturing the right to freedom of speech. The planned stress tests for next month seem encouraging; they should give both the regulator and the regulated providers a chance to better understand each other, be on the same page, and work together for a safer online community.

Thank you for reading Kevin Unscrambles. Don't forget to subscribe and get full access.