Twitter planning policy changes to help combat deepfakes
Twitter said Monday that it plans to change its policies on manipulated videos such as deepfakes, and it's asking the public for help shaping them.
“We think that a lot of people will have an interest in this space,” said Twitter Chief Legal Officer Vijaya Gadde at the WSJ Tech Live conference in Laguna Beach, California.
Deepfakes use artificial intelligence to create videos of people doing or saying something they didn't. Social networks, including Facebook and Twitter, have been grappling with manipulated videos ahead of the 2020 elections. Earlier this year, Twitter and Facebook left up an altered video of House Speaker Nancy Pelosi that made it seem like she was slurring her words, a move that drew criticism, especially from Democrats.
The US intelligence community's 2019 Worldwide Threat Assessment also noted that deepfakes could be used to meddle in elections both in the US and in allied nations.
Gadde didn't say when Twitter will roll out the new policy on manipulated videos. The company is still deciding what to do once deepfakes are detected, including whether to label the videos or remove them.
Twitter said in a tweet that it will start gathering public feedback in the coming weeks.
In the coming weeks, we'll announce a feedback period so that you can help us refine this policy before it goes live. Stay tuned for more!
— Twitter Safety (@TwitterSafety) October 21, 2019