2024 is going to be cuckoo-bananas when it comes to elections, and the media is already overwhelmed by the volume and scale of political disinformation. Could tech companies, especially those working in the DAM world, help counterbalance the onslaught of deep fakes and other AI-generated misinformation heading our way?
One spark of hope is the new AI Elections Accord that over 20 tech companies - from Adobe to X - signed this week, teaming up to fight Deceptive AI Election Content. If previous industry pledges are any indication, we will most likely see varying levels of compliance from the signatories, so I'm remaining skeptical for now.
I do find the tenets promising, as they focus on detection and removal as well as education and prevention. Repeating lies - even while debunking them - can amplify their impact. Instead of fact-checking after the phony Biden-hugging-Martians photos are shared, it's up to us to make sure the public knows deep fakes are coming and what they are (this piece is good to share with your less-AI-savvy friends and family now), and to teach media literacy before we are inundated with politically motivated, AI-generated images, video, audio, and other assets.
Reading the goals, I think these tenets could continue post-election to improve the quality and transparency of a variety of assets - from news photos to proprietary content.
If you click through to the full text of the accord, you’ll read the steps the companies plan to take to reach these goals. One highlight:
Fostering cross-industry resilience to Deceptive AI Election Content by sharing best practices and exploring pathways to share best-in-class tools and/or technical signals about Deceptive AI Election Content in response to incidents.
Private companies sharing technology so they can respond quickly to Deceptive AI is new to me. If you've ever tried to have stolen art or revenge porn or other content removed from a platform, you've learned how much each platform's response varies. Success on one doesn't guarantee success on another. If they are committed to responding across platforms, I hope this continues long after the election cycles are over.
Good for all of us as voters; what about us as DAM pros? I find this step applies to issues we are already facing in DAM:
Seeking to detect the distribution of Deceptive AI election content hosted on our online distribution platforms where such content is intended for public distribution and could be mistaken as real. This might include using detection technology, ingesting open standards-based identifiers created by AI-producing companies or using content moderation services, enabling creators to disclose their use of AI when they upload content, and/or providing pathways for the public to report suspected Deceptive AI Election Content.
We need tools to detect and label AI-generated content, and having tech companies focus on building these tools and enabling reporting means we could see similar features added to DAM platforms as well. It wouldn't be the first time technology developed for civic goals ended up in the general/commercial marketplace - home computers, Wi-Fi, the internet. It's how we all ended up in this digital media environment in the first place, right? Maybe by using Generative AI tools to help detect and prevent the sharing of deceptive assets, we can limit their negative impact on our single source of truth.
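To make that a little more concrete, here's a rough sketch of what an ingest-time check could look like in a DAM pipeline. It only looks for the IPTC "trainedAlgorithmicMedia" digital source type marker that some generative AI tools now embed in XMP metadata; the function names are hypothetical, a production system would use a real XMP or Content Credentials parser, and a crude byte-level search like this will miss anything that has been stripped or re-encoded.

```python
# Naive sketch: flag assets whose embedded XMP declares the IPTC
# "trainedAlgorithmicMedia" digital source type (i.e., created by generative AI).
# Function names are hypothetical; a real DAM integration would use a proper
# XMP/C2PA parser instead of searching raw bytes.
from pathlib import Path

# IPTC Digital Source Type value meaning "created by generative AI"
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"
XMP_PACKET_START = b"<x:xmpmeta"  # XMP is embedded as plain text in JPEG/PNG/TIFF

def looks_ai_generated(asset_path: Path) -> bool:
    """Return True if the file's embedded XMP appears to declare an AI source type."""
    data = asset_path.read_bytes()
    # Only trust the marker if it appears after the start of an XMP packet.
    xmp_start = data.find(XMP_PACKET_START)
    return xmp_start != -1 and AI_SOURCE_MARKER in data[xmp_start:]

def ingest(asset_path: Path) -> dict:
    """Hypothetical ingest step: attach an 'ai_generated' flag for reviewers."""
    return {
        "file": asset_path.name,
        "ai_generated": looks_ai_generated(asset_path),  # drives labeling / review queue
    }

if __name__ == "__main__":
    import sys
    for arg in sys.argv[1:]:
        print(ingest(Path(arg)))
```

In a real system you'd route the flag into your review queue or labeling workflow rather than blocking the upload outright; the point is simply that the same "open standards-based identifiers" the accord mentions could surface in DAM metadata pipelines, too.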