As voters head to the polls this year in more than 50 countries, experts have raised the alarm over AI-generated political disinformation and the prospect that malicious actors will use generative AI and social media to interfere with elections. Meta has previously faced criticism over its content moderation policies around past elections, for example when it failed to prevent the January 6 rioters from organizing on its platforms.
Clegg defended the company’s efforts to stop violent groups from organizing, but he also stressed the difficulty of keeping up. “This is a highly adversarial space. You play Whack-a-Mole, candidly. You remove one group, they rename themselves, rebrand themselves, and so on,” he said.
Clegg argued that compared with 2016, the company is now “utterly different” when it comes to moderating election content. Since then, it has removed more than 200 “networks of coordinated inauthentic behavior,” he said. The company now relies on fact-checkers and AI technology to identify unwanted groups on its platforms.
Earlier this year, Meta announced it would label AI-generated images on Facebook, Instagram, and Threads. Meta has started adding visible markers to such images, as well as invisible watermarks and metadata in the image file. The watermarks will be added to images created using Meta’s generative AI systems or to ones that carry invisible industry-standard markers. The company says its measures are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.
But at the same time, Clegg admitted that tools to detect AI-generated content are still imperfect and immature. Watermarks in AI systems are not adopted industry-wide, and they are easy to tamper with. They are also hard to implement robustly in AI-generated text, audio, and video.
Ultimately that should not matter, Clegg said, because Meta’s systems should be able to catch and detect mis- and disinformation regardless of its origins.
“AI is a sword and a shield in this,” he stated.
Clegg also defended the company’s decision to allow ads claiming that the 2020 US election was stolen, noting that such claims are common throughout the world and saying it’s “not feasible” for Meta to relitigate past elections. Just this month, eight state secretaries of state wrote a letter to Meta CEO Mark Zuckerberg arguing that the ads could still be dangerous, and that they have the potential to further threaten public trust in elections and the safety of individual election workers.
You can watch the full interview with Nick Clegg and MIT Technology Review executive editor Amy Nordrum below.