As the fight against deepfakes heats up, one firm is helping us fight back. Hugging Face, an organization that hosts AI projects and machine learning tools, has developed a range of "state of the art technology" to combat "the rise of AI-generated 'fake' human content" such as deepfakes and voice scams.
This range of technology includes a collection of tools labeled 'Provenance, Watermarking and Deepfake Detection.' The tools not only detect deepfakes but also help by embedding watermarks in audio files, LLM output, and images.
Introducing Hugging Face
Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, announced the tools in a lengthy Twitter thread, where she broke down how each of the different tools works. The audio watermarking tool, for instance, works by embedding an "imperceptible signal that can be used to identify synthetic voices as fake," while the image "poisoning" tool works by "disrupt[ing] the ability to create facial recognition models."
Furthermore, the image "guarding" tool, Photoguard, works by making an image "immune" to direct editing by generative models. There are also tools like Fawkes, which limits the use of facial recognition software on publicly available pictures, and numerous embedding tools that embed watermarks detectable by specific software. Such embedding tools include Imatag, WaveMark, and Truepic.
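To make the audio watermarking idea concrete, here is a minimal, hypothetical sketch of how an "imperceptible signal" can mark audio and later be detected. This is not the code behind any of the tools above; it simply illustrates the general technique: add a low-amplitude pseudorandom signal derived from a secret key, then detect it by correlating the audio against that same keyed signal.

```python
import numpy as np

def embed_watermark(audio, key=42, strength=0.01):
    """Add a low-amplitude pseudorandom signal derived from `key`.

    At this strength the mark sits well below the audible content,
    but remains statistically detectable via correlation.
    """
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(audio.shape)
    return audio + strength * mark

def detect_watermark(audio, key=42, threshold=5.0):
    """Correlate the audio with the keyed signal.

    Returns True when the normalized correlation score exceeds the
    threshold, i.e. the watermark is very likely present.
    """
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(audio.shape)
    score = np.dot(audio, mark) / (np.std(audio) * np.sqrt(audio.size))
    return bool(score > threshold)

# Toy "audio": one second of a 440 Hz tone plus background noise.
noise_rng = np.random.default_rng(0)
t = np.arange(16000) / 16000
clean = 0.1 * np.sin(2 * np.pi * 440 * t) + 0.01 * noise_rng.standard_normal(16000)
marked = embed_watermark(clean)

print(detect_watermark(marked))  # True  — watermark found
print(detect_watermark(clean))   # False — no watermark
```

Real systems are far more sophisticated (the mark must survive compression, re-recording, and editing), but the embed-then-correlate structure above is the basic intuition.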
With the rise of AI-generated "fake" human content–"deepfake" imagery, voice cloning scams & chatbot babble plagiarism–those of us working on social impact @huggingface put together a collection of some of the state-of-the-art technology that can help: https://t.co/nFS7GW8dtk
— MMitchell (@mmitchell_ai) February 12, 2024
While these tools are certainly a good start, Mashable tech reporter Cecily Mauran warned that they may have some limitations. "Adding watermarks to media created by generative AI is becoming critical for the protection of creative works and the identification of misleading information, but it's not foolproof," she explains in an article for the outlet. "Watermarks embedded within metadata are often automatically removed when uploaded to third-party sites like social media, and nefarious users can find workarounds by taking a screenshot of a watermarked image."
"Nonetheless," she adds, "free and available tools like the ones Hugging Face shared are way better than nothing."
Featured Image: Photo by Vishnu Mohanan on Unsplash