Algorithms Already Spread Disinformation in Africa: AI Hysteria Diverts Attention, Says Odanga Madung

More than 70 countries are due to hold regional or national elections by the end of 2024. It will be a period of huge political significance across the globe, with more than 2 billion people (mostly from the global south) directly affected by the outcome of these elections. The stakes for the integrity of democracy have never been higher.

As concerns mount about the role of information pollution, disseminated through the vast platforms of US and Chinese corporations, in shaping these elections, a new shadow looms: artificial intelligence – more specifically, generative AI such as OpenAI’s ChatGPT – has moved into the technological mainstream.

The recent wave of hype around AI has seen a fair share of doom-mongering. Ironically, this hysteria has been fed by the tech industry itself. OpenAI’s founder, Sam Altman, has been touring Europe and the US, making impassioned pleas for regulation of AI while also discreetly lobbying for favourable terms under the EU’s proposed AI Act.

Altman calls generative AI a major threat to democracy, warning of an avalanche of disinformation that blurs the lines between fact and fiction. But this discussion needs nuance, because the hand-wringing misses the point: we reached that juncture a long time ago.

Tech multinationals such as TikTok, Facebook and Twitter built highly vulnerable AI systems and left them unguarded. As a result, disinformation spread via social media has become a defining feature of elections globally.

In Kenya, for example, I spent months documenting how Twitter’s trending algorithm was easily manipulated by a thriving disinformation-for-hire industry to spread propaganda and quash dissent through the platform. Similar discoveries were made by other journalists in Nigeria prior to its recent elections.

My research in Kenya also found that TikTok’s “For You” algorithm was feeding hundreds of hateful and inflammatory propaganda videos to millions of Kenyans ahead of its 2022 elections. TikTok and Twitter have also recently come under scrutiny for their role in amplifying the hate-filled backlash towards LGBTQ+ minorities in Kenya and Uganda.

Authoritarianism uses emotions to polarise people, finding fertile ground in specific events and febrile political climates. Social media platforms such as Facebook and TikTok have accelerated the spread of propaganda through microtargeting and by evading election silence windows, or blackout periods, making distribution remarkably simple.

In other words, effective disinformation campaigns do not need AI-generated content at all. The crux of the issue lies not in the content made by AI tools such as ChatGPT, but in how people receive, process and comprehend the information served up by the AI systems of tech platforms.

For this reason, I take the tech industry’s sudden realisation with a pinch of salt. By letting Altman define what we should care about when it comes to AI, we are allowing a corporation – rather than tried and tested institutions such as consumer and data protection agencies – to set the terms of safety and risk mitigation for this technology.
