Ahead of the U.S. elections in November next year, Meta, the parent company of Facebook, announced on the 8th (local time) that it will require global advertisers to disclose whether they used other companies’ artificial intelligence (AI) technology in political and certain other advertisements.
Advertisers may use third-party AI tools, such as the image generator DALL-E, in advertisements on Meta’s platforms, but they must disclose that use.
In addition, Meta decided not to make its own generative AI tools available for creating advertisements related to politics, social issues, housing, employment, pharmaceuticals, and financial services.
Meta stated that it believes this approach will allow it to build the right safeguards for the use of generative AI in advertisements related to potentially sensitive topics.
Accordingly, from next year, anyone who wants to run political advertisements on Meta’s social media platforms must disclose whether AI tools were used, and advertisements that omit this disclosure may be rejected.
Once an advertisement is approved, it will carry a label indicating that it was created with AI tools.
Meta added that advertisers who repeatedly fail to disclose their use of AI could face penalties, though it did not specify what those penalties would be.
The policy appears to be a response to past controversies over political advertisements on Meta’s social media platforms, including Facebook.
Facebook has previously been criticized for failing to properly manage political advertisements. Notably, in 2016, it was accused of effectively ignoring Russian interference in the U.S. election through advertisements on its platform. As the controversy grew, Meta did not accept political advertisements for four months before the 2020 election.
By Ahn Yoo Jin