Meta’s moderation cop-out will fuel misinformation


It did not take long after Mark Zuckerberg announced major changes to Meta’s content moderation strategy for the first satirical meme to land in my WhatsApp inbox: “Facebook founder and convicted paedophile Mark Zuckerberg, dead at 36, says social media sites should not fact-check posts.” The fake news report, which went on to specify that Zuckerberg purportedly “died of coronavirus and complications of syphilis”, vividly illustrated the chaotic power of social media – it is a place where ordinary users can tilt at the windmills of the powerful, but also where falsehoods can thrive if allowed to go unchecked. The post also – probably intentionally – illustrated the problem with Zuckerberg’s new approach of leaving fact-checking and content moderation in the hands of whimsical users.

Zuckerberg’s announcement outlined a radical shift in the way Meta (owners of Facebook, Instagram, Threads and WhatsApp) will approach content moderation. It plans, among other moves, to eliminate fact-checkers (for now, only in the US) and replace them with a system of “community notes”, where users can comment on or correct information posted by other users – a system similar to the one already used by X (formerly Twitter). Meta will also make it more difficult for offending content to be removed, and the company will also relax restrictions on posts to do with sensitive topics like immigration and gender.

Although Meta had set up an elaborate moderation programme and collaborated with fact-checkers around the world, it was previously criticised for not paying enough attention to moderation in languages other than English, and for gagging whistleblowers like South African Daniel Motaung (who worked with Meta’s partner in Kenya) for speaking out about poor working conditions. In South Africa, Meta collaborates with two third-party organisations, Africa Check and the news service AFP, to do its fact-checking. The revenue these organisations derive from fact-checking on behalf of Meta is often a crucial contribution towards their sustainability, and if the decision to cut back on fact-checking is extended to regions outside the US, these organisations will be very negatively affected.

In an open letter to Zuckerberg, the International Fact-Checking Network (IFCN) strongly criticised what it sees as Zuckerberg’s false equation of fact-checking with “censorship”, reminding him that all Meta’s fact-checkers had to be vetted by the IFCN to ensure they were nonpartisan, and that they do not have the authority to remove content or accounts. Community notes, the IFCN goes on to note, are not as effective as the flagging of content by fact-checkers, as these notes are more often than not based on a dominant political consensus rather than factual evidence. In highly polarised and conflictual contexts, handing responsibility for fact-checking to users could further marginalise critical or minority voices, or render them more vulnerable to attacks.

It is true that fact-checking is not a panacea for disinformation online. Information that has been verified may not reach the users who most need to see it, or, even if it does, many users may not act on it. Often, the original disinformation might be more entertaining or have a stronger emotional appeal. Fact-checking may also have a “backfire effect”, when corrections lead users to double down on their original beliefs. Then there are so-called “zombie claims” – false information that refuses to go away even though it has been debunked. Because of these limitations, fact-checking has to be combined with other strategies like media literacy and regulation if counter-disinformation strategies are to be effective.

Especially if audiences are expected to be more discerning in the absence of formal fact-checking, their ability to distinguish false information from accurate information will become increasingly important. But the way audiences engage with misinformation online – their motivations to correct, not correct, or share disinformation – is deeply influenced by social and cultural contexts. In African contexts, for instance, some users may be reluctant to correct their elders or community leaders out of respect, or may share disinformation because of a misplaced loyalty towards community members they think might benefit from such information. Nevertheless, correct and reliable information on social media can often mean the difference between life and death for citizens, politicians and activists, especially in the Global South, where social contracts are often fragile and governments are frequently compromised and authoritarian, as well as in conflict zones in the Middle East, where unmoderated harmful content can fuel tensions.

But Zuckerberg’s announcement was not made because of these difficulties with fact-checking or its limitations. The most telling aspect of this announcement is that Zuckerberg framed these moves in terms of a move away from “censorship” and toward “free speech”. Not only is the notion of free speech that fails to protect vulnerable users or marginalised communities highly problematic, but in Meta’s case it is also hypocritical. Human rights observers have pointed out that Meta has been inconsistent in applying its own policies when it comes to, for instance, content related to Palestine, which has reportedly been systematically suppressed. In this regard, Zuckerberg’s positioning of fact-checking as a form of censorship aligns with Elon Musk’s supposed “free-speech absolutism”, which has also been shown to be not only hypocritical, but clearly in favour of interests on the far right of the political spectrum.

But most of all, the timing of Meta’s announcement, on the eve of the return of Donald Trump to the White House, strongly suggests that Zuckerberg is currying favour with the new political powers in the US. Given the benefit that Trump himself has derived from the “disinformation machine” that helped return him to power, and his threats to Zuckerberg after Meta banned him from their platforms in the wake of the 6 January 2021 attack on the Capitol, Zuckerberg’s decision to cut down on moderation is likely to have also been informed by the new political climate in the US.

Zuckerberg’s decision may, therefore, be less about what is in the best interest of users, and more about posturing as part of a broader power play between platforms, politicians and the global movement for a more just and equitable internet. In this, his vision has been criticised for being US-centric, in that it aligns Big Tech companies like Meta and X with American political power. As researchers Tom Divon and Jonathan Ong put it, Zuckerberg’s decision “is no championing of free speech – it’s a power play, a bold tech bro flex signalling a new alliance between Silicon Valley and Washington against global tech regulation and tech justice activism”. As such, Meta’s announcement is a reminder that debates about freedom of speech and censorship are often proxies for more pragmatic, utilitarian jockeying for power, privilege and economic gain. On the African continent, the smokescreen of “fake news” is already abused to put pressure on journalists and clamp down on free speech, while African elections have been marred by disinformation campaigns. A global climate of disdain for fact-checking and moderation will certainly benefit authoritarian African leaders and unscrupulous politicians. Citizens committed to free, ethical and critical expression in the Global South should watch these developments with a high degree of concern, if not alarm.

  • Herman Wasserman is professor of journalism and director of the Centre for Information Integrity in Africa at Stellenbosch University.