Why the Pentagon Is Jumping on Deepfake Detection
U.S. adversaries are using technology for fraud, disinformation, and other malicious activities. In response, the Pentagon's Defense Innovation Unit (DIU) is moving quickly to adopt commercially developed tools that detect and attribute fabricated or manipulated multimedia and online content.
Deepfake technology uses machine learning and AI to "face swap" individuals or to generate audio and video that portray people saying and doing things they never said or did.
Deepfakes are increasingly difficult to detect and have been used to create fake pornographic videos targeting celebrities, to spread political disinformation, and for other malicious ends. In one recent example, a fabricated video appeared to show Ukrainian President Volodymyr Zelensky ordering his soldiers to surrender in the war against Russia.
“This technology is increasingly common and credible, posing a significant threat to the Department of Defense, especially as U.S. adversaries use deepfakes for deception, fraud, disinformation, and other malicious activities,” DIU officials wrote.
DIU aims to rapidly field deepfake detection and attribution solutions, inviting vendors that meet its criteria and "desired solution attributes" to apply for collaboration and possible investment. Detection, analysis, and identification of deepfakes are the top priorities. All submissions (which closed June 28) were required to comply with DIU's Responsible AI Guidelines and align with an open systems architecture approach.
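For readers curious what automated detection looks like in practice, here is a minimal sketch of a single inference step, assuming a hypothetical binary classifier fine-tuned to distinguish real from synthetic faces. The model file ("detector.pt"), the ResNet-18 architecture, and the class-index convention are illustrative assumptions, not anything DIU or its vendors have described.

```python
# A minimal sketch of a deepfake-detection inference step. The weights
# file ("detector.pt") and the architecture choice are hypothetical.
import torch
import torchvision.transforms as T
from torchvision.models import resnet18
from PIL import Image

def load_detector(weights_path: str) -> torch.nn.Module:
    """Load a hypothetical real-vs-fake classifier (2 output classes)."""
    model = resnet18(num_classes=2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def score_frame(model: torch.nn.Module, image_path: str) -> float:
    """Return the model's estimated probability that a frame is synthetic."""
    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    # Class index 1 is "fake" by this sketch's convention.
    return torch.softmax(logits, dim=1)[0, 1].item()

if __name__ == "__main__":
    detector = load_detector("detector.pt")  # hypothetical weights file
    print(f"P(fake) = {score_frame(detector, 'frame.jpg'):.3f}")
```

Real-world systems go well beyond a per-frame score, combining temporal cues across video, audio analysis, and provenance metadata, which is part of why DIU is turning to specialized commercial providers rather than a single in-house model.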
The DOD intends to buy rather than build its own tools. DIU works with department partners such as DARPA to prototype and integrate offerings from cutting-edge industry providers.