Google intends to implement a new rule for ads during election periods: any political ad served on the company’s platforms must disclose when AI-generated images or audio are used. The information comes from the BBC.
The new rules are expected to come into effect in November, one year before the US presidential elections. According to a Google spokesperson interviewed by the BBC, the decision is a response to “the prevalent growth of tools that generate artificial content”.
Under the new policy, any ad containing artificial material that depicts a real person or event must carry a prominent warning about the use of “artificial content”. Google suggests disclaimers such as “This image does not represent actual events” or “This video content was artificially created”.
Google’s advertising policies already include a number of restrictions on misinformation and fake news, but until now there was no specific approach for content created with generative AI. The new rule covers images generated by artificial intelligence – as was the case with the viral images of Donald Trump’s arrest – as well as the use of deepfakes in videos.
Google is also developing an AI image recognition tool
In addition to the new rules, Google is also working to make AI-generated content easier to recognize. Its subsidiary DeepMind announced SynthID, a technology that acts as a kind of invisible watermark and signals when an image was generated by artificial intelligence.
source: BBC