Will AI make all information so untrustworthy that nobody will believe anything? No. Here's the real problem.
Nearly all media could always be faked, but with the help of AI it can now be manufactured easily and cheaply. That distinction is essential. Until this year, we evaluated the integrity of media partly based on its complexity, using the following rule:
The more complex the media,
the harder it was to manipulate (requiring time, money, and skill),
so the more complex the media and the more obscure the topic, the less likely it was to be fake, from conversational videos to scientific papers.
This complexity heuristic (a simple rule of thumb used in fast decision-making) doesn’t work anymore. In an AI-fueled world, everything digital can be faked.
So far, the impact of this new capability has been relatively minor since the tools are still very new. That will change quickly, ramping up during the 2024 US election. Soon after, we will routinely see deepmedia used at the:
Global level, to create wartime propaganda or to heighten the fear of crisis events.
National level, during elections, or to wage social warfare.
Personal level, to discredit, blackmail (“if you don’t send us $, we will send this video to everyone you know,” sent via e-mail or text at scale), or otherwise damage targeted individuals.
However, as fast as this arrives, new methods of detecting and blocking it will be developed and deployed (especially if the political outcomes deepmedia makes possible go against the establishment):
Watermarks on AI media (at least for media created by the most powerful online tools). Many open-source AIs may not have this restriction.
Improvements to the human search and verification engine (the individuals and organizations that routinely verify or discredit posted media), along with new heuristics for determining what is true or false.
New AI tools. For example, an AI that determines the source of a piece of media and the number of connections it has to supportive information, then builds a confidence rating on its integrity (a GPT plug-in already does some of this).
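To make the last idea concrete, here is a minimal sketch of how such a confidence rating might combine a source-reputation score with corroborating connections. Every name, weight, and formula here is an illustrative assumption, not a description of any real tool or plug-in.

```python
# Hypothetical confidence-rating sketch. The 60/40 weighting and the
# diminishing-returns curve are assumptions chosen for illustration.

def confidence_rating(source_reputation: float, corroborating_links: int) -> float:
    """Combine a source-reputation score (0.0-1.0) with the number of
    independent pieces of supportive information into a 0-100 rating."""
    if not 0.0 <= source_reputation <= 1.0:
        raise ValueError("source_reputation must be between 0.0 and 1.0")
    # Diminishing returns: each additional corroborating link adds less confidence.
    corroboration = 1.0 - 0.5 ** corroborating_links
    # Weight the source itself more heavily than corroboration (assumed 60/40 split).
    score = 0.6 * source_reputation + 0.4 * corroboration
    return round(100.0 * score, 1)

# A reputable source with several corroborating reports rates high;
# an unknown source with no corroboration rates low.
print(confidence_rating(0.8, 3))  # → 83.0
print(confidence_rating(0.2, 0))  # → 12.0
```

A real system would need far richer signals (provenance metadata, watermark checks, cross-outlet agreement), but even this toy version shows the basic shape: integrity as a score derived from who published the media and how much independent support it has.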
NOTE: these measures won’t help in some cases, particularly at the individual level, since we don’t have fundamental data ownership rights. Without these rights, and a data rights system/industry to enforce them on our behalf, our voices, appearance, and behaviors can be freely learned by AIs and used by anyone without our consent.
As safeguards and tools mature, we’ll increasingly find that the BIG problem with deepmedia isn’t proving it false; it’s that many people don’t care if the deepmedia they experience is accurate or not, and they will: