“This work represents a significant step forward in strengthening our information advantage as we combat sophisticated disinformation campaigns and synthetic-media threats,” says Bustamante. Hive was chosen out of a pool of 36 companies to test its deepfake detection and attribution technology with the DOD. The contract could allow the department to detect and counter AI deception at scale.
Defending against deepfakes is “existential,” says Kevin Guo, Hive AI’s CEO. “This is the evolution of cyberwarfare.”
Hive’s technology has been trained on a large amount of content, some AI-generated and some not. It picks up on signals and patterns in AI-generated content that are invisible to the human eye but can be detected by an AI model.

“Turns out that every image generated by one of these generators has that sort of pattern in there if you know where to look for it,” says Guo. The Hive team constantly keeps track of new models and updates its technology accordingly.
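Hive has not published how its detector works internally, but the approach Guo describes, a model trained on labeled real and generated images that learns to spot generator artifacts, can be illustrated with a minimal sketch. The backbone, input size, and binary labels below are assumptions for illustration, not Hive’s actual design:

```python
# Minimal sketch of a real-vs-AI-generated image classifier (illustrative
# only; Hive's actual architecture and training data are not public).
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained backbone with a binary head: 0 = real, 1 = AI-generated.
detector = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
detector.fc = nn.Linear(detector.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image) -> float:
    """Return the model's probability that `image` is AI-generated."""
    detector.eval()
    with torch.no_grad():
        batch = preprocess(image).unsqueeze(0)  # add batch dimension
        logits = detector(batch)
        return torch.softmax(logits, dim=1)[0, 1].item()
```

In practice such a classifier is retrained as new generators appear, which matches Guo’s point about continually tracking new models; attribution (identifying which generator made an image) would extend the head from two classes to one per known generator.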
The tools and methodologies developed through this initiative have the potential to be adapted for broader use, not only addressing defense-specific challenges but also safeguarding civilian institutions against disinformation, fraud, and deception, the DOD said in a statement.
Hive’s technology offers state-of-the-art performance in detecting AI-generated content, says Siwei Lyu, a professor of computer science and engineering at the University at Buffalo. He was not involved in Hive’s work but has tested its detection tools.
Ben Zhao, a professor at the University of Chicago who has also independently evaluated Hive AI’s deepfake technology, agrees but points out that it is far from foolproof.
“Hive is certainly better than most of the commercial entities and some of the research techniques that we tried, but we also showed that it is not at all hard to circumvent,” Zhao says. The team found that adversaries could tamper with images in a way that bypassed Hive’s detection.
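Zhao’s specific method isn’t described here, but one standard way such circumvention works is an adversarial perturbation: a change too small for a person to notice that flips the detector’s verdict. A textbook sketch, using the fast gradient sign method (FGSM) against the kind of binary detector sketched above, looks like this:

```python
# Illustrative sketch of detector evasion via FGSM (Goodfellow et al.,
# 2015). This is a generic textbook attack, not the specific technique
# Zhao's team used against Hive.
import torch
import torch.nn.functional as F

def evade(detector: torch.nn.Module, image: torch.Tensor,
          epsilon: float = 0.01) -> torch.Tensor:
    """Perturb `image` (a 1xCxHxW batch in [0, 1]) so the detector
    leans toward label 0 ("real"). The change is nearly invisible."""
    image = image.clone().requires_grad_(True)
    logits = detector(image)
    target = torch.zeros(image.size(0), dtype=torch.long)  # 0 = "real"
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Step against the gradient to make "real" more likely, then clamp
    # so the result is still a valid image tensor.
    adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1)
    return adversarial.detach()
```

The asymmetry this illustrates is the crux of Zhao’s caution: the defender’s model must catch every generator, while an attacker only needs one small, targeted perturbation to slip past it.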