Microsoft, Adobe rally behind new California AI law

With the US set to hold presidential elections later this year, analysts have warned that artificial intelligence-generated content could dupe and mislead American voters. Former US President Donald Trump recently shared fake, AI-generated photos of musician Taylor Swift and her fans appearing to support his presidential campaign on the social media platform Truth Social.

With the rise of such AI-generated deepfakes, state lawmakers in California, US, have introduced more than 65 new bills that touch on AI regulation. Most of those will not see the light of day, but one piece of legislation gaining steam and being backed by the big AI companies is a new bill called AB 3211.

What does this new bill propose?
AB 3211, also known as the ‘California Digital Provenance Standards Bill’, would require tech companies to add watermarks to the metadata of images and videos created using artificial intelligence. Metadata conveys vital information, including the origin, context, and history of a piece of text or an image.

“The Legislature should require online platforms to label synthetic content produced by GenAI. With these actions, the Legislature can help ensure that Californians stay safe and informed,” the bill reads.

The bill requires that the provenance data embedded in AI-generated content identify, at a minimum, the synthetic nature of that content, the name of the generative AI provider, the time and date the provenance data was added, and which parts of the content are AI-generated.
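For illustration only, the snippet below sketches in Python what such minimal provenance data could look like if attached to a generated image as a JSON record; the field names and structure are assumptions for this example, not the bill's or any standard's actual schema.

import json
from datetime import datetime, timezone

# Hypothetical provenance record covering the bill's minimum fields.
# Field names are illustrative assumptions, not a real schema.
provenance = {
    "is_synthetic": True,                                  # the content is AI-generated
    "provider": "ExampleGenAI Inc.",                       # name of the generative AI provider
    "added_at": datetime.now(timezone.utc).isoformat(),    # when the provenance data was added
    "synthetic_regions": ["entire_image"],                 # which parts are AI-generated
}

print(json.dumps(provenance, indent=2))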

The bill also asks tech companies to create tools that let users check whether an image or video is AI-generated and to display the unique provenance data attached to the content. These tools must be tested to make sure they cannot be abused for nefarious purposes, for instance by attaching fake provenance data to AI-generated content.
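One way such a tool could resist fake provenance data, shown here purely as a sketch, is to have the provider sign each record so that an altered or fabricated record fails verification. The HMAC approach and the key below are assumptions for this example, not a mechanism the bill prescribes.

import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # assumption: a secret held by the GenAI provider

def sign_provenance(record: dict) -> str:
    # Serialise the record deterministically and sign it.
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_provenance(record: dict, signature: str) -> bool:
    # A genuine record reproduces the same signature; a fake one does not.
    return hmac.compare_digest(sign_provenance(record), signature)

record = {"is_synthetic": True, "provider": "ExampleGenAI Inc."}
signature = sign_provenance(record)

print(verify_provenance(record, signature))                          # True: genuine record
print(verify_provenance({**record, "provider": "Faked"}, signature)) # False: tampered record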

One of the main difficulties in detecting and identifying AI-generated content is that photo and video metadata can be stripped. The bill addresses this by banning software applications and tools whose primary purpose is to remove provenance data from synthetic content.
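To see why metadata-based provenance is fragile, the short sketch below (using the Pillow imaging library, with a placeholder file name) shows that simply copying an image's pixels into a new file discards its EXIF metadata.

from PIL import Image

# "photo.jpg" is a placeholder path for this illustration.
original = Image.open("photo.jpg")
print(dict(original.getexif()))   # whatever metadata tags the file carries

# Copy only the pixels into a fresh image: the metadata does not come along.
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))
stripped.save("photo_stripped.jpg")

print(dict(Image.open("photo_stripped.jpg").getexif()))  # typically empty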

The bill also requires large online platforms, such as Instagram or X, to label AI-generated content in an “easy-to-understand” format for users. In a similar vein, sound recordings and music videos shared on these platforms should be labelled with the name of the artist and the track, and with the copyright holder or other licensor information, according to the bill.

Who has supported the bill and why?
OpenAI, the company behind ChatGPT, has reportedly also backed the draft legislation. “New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content,” Jason Kwon, chief strategy officer of OpenAI, was quoted as saying by Reuters.

According to a report in TechCrunch, the other two big companies to throw their weight behind this bill on AI are Adobe and Microsoft.

Interestingly, an earlier version of AB 3211 was opposed by an industry association that counts both Adobe and Microsoft among its members; the companies came around after the bill was amended. Among other changes, the amendments lower the fines for violations to $100,000 for an intentional violation and $25,000 for an unintentional one.

In fact, all three of these tech companies are part of the Coalition for Content Provenance and Authenticity (C2PA), an initiative to create an industry-wide standard for marking AI-generated content.
