Amid major elections in some 50 countries this year, including the U.S. presidential election, Facebook has launched a crackdown on AI-generated fake news. The company is focusing in particular on developing technology to filter out deepfake videos and manipulated images.
According to reports on the 6th (local time) by the AP and The Washington Post, Meta, which operates the social networking services Facebook and Instagram, is developing technical standards to identify AI-generated images, video, and audio.
Nick Clegg, President of Global Affairs at Meta, did not give a specific timeline for rolling out the technology, but stated, “In the next few months, we will label AI-generated images from external sources and refine this process over the next year.” Clegg added, “We are working with the industry to develop classifiers that can help automatically detect AI-generated content, even when the content lacks AI watermarks.”
Meta has already set up a system under which images created by its own AI tools automatically receive a mark indicating they were AI-generated when posted on Facebook, Instagram, and Threads. Once the new technology is complete, it is expected to also identify images created by other companies’ AI tools, including those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
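One signal such labeling schemes can rely on is provenance metadata embedded by the generator, such as the IPTC digital source type “trainedAlgorithmicMedia.” The Python sketch below is a rough illustration only, not Meta’s actual pipeline: it crudely scans a file’s raw bytes for that IPTC URI, and the function name is hypothetical. Production systems parse the metadata properly and fall back to invisible watermarks or classifiers when metadata has been stripped.

```python
# Illustrative sketch only, not Meta's actual detection pipeline.
# Some generators embed the IPTC digital source type
# "trainedAlgorithmicMedia" in an image's XMP metadata; this crude
# check scans the file's raw bytes for that URI.
from pathlib import Path

IPTC_AI_SOURCE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_source_marker(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC AI-source URI."""
    return IPTC_AI_SOURCE in Path(path).read_bytes()

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        verdict = "AI marker found" if has_ai_source_marker(image_path) else "no marker"
        print(f"{image_path}: {verdict}")
```

Because such metadata is easy to strip, a check like this can only ever be one layer of a detection system, which is why Meta is also pursuing classifier-based approaches.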
The question is how well the technology to identify AI images will work. Gili Vidan, a professor of information science at Cornell University, told the AP, “AI labeling technology will be quite effective, but it won’t catch everything.” She added, “There will be increased user confusion about what the AI label means, how accurate it is, and whether an image without an AI label can truly be trusted.”