WASHINGTON – Congress should amend portions of U.S. law that allow social media companies to enjoy immunity for content posted on their platforms in light of the significant dangers posed by artificial intelligence-enabled fake videos, a panel of experts told the House Intelligence Committee at a hearing Thursday.
Social media companies should be asked to exercise reasonable moderation of content, and U.S. government agencies should educate citizens on how to tell if a video is fake and invest in technologies that will aid in such determinations, the experts said.
The hearing, led by House Intelligence Committee Chairman Adam B. Schiff, comes as lawmakers and technologists fear that Russia, China and other foreign powers are likely to scale up their attacks on U.S. elections in 2020 with “deepfake” videos that will leave American voters unable to distinguish real videos from intentionally manipulated ones.
In 2016, a Kremlin-backed troll farm created fake social media accounts to mislead American voters, but “three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic,” Schiff said in his opening remarks at the hearing.
Artificial intelligence technologies now allow video and audio of a person to be manipulated to make the person appear to say and do things he or she never said or did. Such videos “enable malicious actors to foment chaos, division or crisis and they have the capacity to disrupt entire campaigns, including that for the presidency,” Schiff said.
Having unwittingly enabled fake accounts on their platforms in 2016, social media companies once again face scrutiny over how they handle misleading videos. Last month, Facebook drew intense criticism for a doctored video – altered using old-fashioned editing techniques – of Speaker Nancy Pelosi that makes her appear to slur her words, as if she were intoxicated. Facebook refused to take down the video, saying instead that it would tweak its algorithm to reduce the video’s reach.
Section 230 of the Communications Decency Act, which shields social media companies from being treated as publishers of material that appears on their platforms, may be giving the companies too much leeway, Schiff said. “Should we do away with that immunity?”
Congress should amend the law “to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Danielle Citron, a law professor at the University of Maryland, told the committee. The current exemption gives social media companies no incentive to take down “destructive, deepfake content,” she said.
Citron said deepfake videos not only can be used by foreign and domestic perpetrators against political opponents but also could be used to hurt companies – for example, by showing a CEO saying something derogatory just hours before a public offering, which could cause the company’s stock price to collapse.
Facebook co-founder and CEO Mark Zuckerberg also has called for regulating online platforms, and in an op-ed in The Washington Post in March, he wrote such regulations should address harmful content, election integrity, privacy and allowing users to take their data with them.