Companies Will Use Generative AI. But Will They Tell You About It?


Google DeepMind’s new tool for identifying images generated by artificial intelligence raises questions for companies about when they should flag AI-produced content for their customers. 

The tool, known as SynthID, was released Tuesday as a beta version in partnership with Google Cloud. 

SynthID works by adding a watermark to images created by Google Cloud’s text-to-image generator, Imagen. By scanning an image for its digital watermark, the tool can assess the likelihood that an image was created by Imagen.
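Google has not published SynthID's underlying method, which is a proprietary, learning-based technique. As a rough illustration only, the toy Python sketch below uses simple least-significant-bit embedding with a shared seed (all names here are hypothetical, not SynthID's API) to show the general idea of a mark that is invisible to the eye but detectable by software.

```python
# Toy illustration of invisible watermarking, NOT SynthID's actual method.
# Assumption: embedder and detector share a secret seed for the bit pattern.
import numpy as np
from PIL import Image

SEED = 42  # shared secret for this demo

def embed_watermark(img: Image.Image) -> Image.Image:
    """Hide a pseudorandom bit pattern in the least significant bit of each pixel."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    pattern = np.random.default_rng(SEED).integers(0, 2, size=pixels.shape, dtype=np.uint8)
    watermarked = (pixels & 0xFE) | pattern  # overwrite the lowest bit only
    return Image.fromarray(watermarked)

def watermark_likelihood(img: Image.Image) -> float:
    """Fraction of pixels whose lowest bit matches the expected pattern (~1.0 if marked, ~0.5 if not)."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    pattern = np.random.default_rng(SEED).integers(0, 2, size=pixels.shape, dtype=np.uint8)
    return float(np.mean((pixels & 1) == pattern))

if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), color=(120, 180, 90))
    marked = embed_watermark(original)
    print(f"unmarked score: {watermark_likelihood(original):.2f}")
    print(f"marked score:   {watermark_likelihood(marked):.2f}")
```

Unlike this sketch, SynthID is designed to survive common edits such as cropping and compression, and it reports a likelihood that an image carries the mark rather than a simple yes or no.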

“We are the first cloud provider to give our customers the ability to embed a watermark directly into the image pixels, invisible to the human eye, without impacting the image quality,” said June Yang, vice president of cloud AI and industry solutions at Google Cloud.

AI research laboratory Google DeepMind and cloud-computing provider Google Cloud are both part of Alphabet.

So far, the application is just for images, but the company said it could evolve to include audio, video and text. 

Companies have been looking to generative AI for a range of customer-facing applications, including generating ad slogans, crafting news releases, producing billboard images and brainstorming new names for products and services, as well as planning photo shoots and summarizing customer product reviews.

However, businesses are divided about when to alert their customers that they are interacting with marketing copy written by AI, for example. Some say it is critical, while others say that as long as the content is accurate, the source shouldn’t matter. 

Unlike manipulated images of politicians or celebrities on social media, it might not matter whether a fictional character used in an advertisement was created by a person or AI, for instance.

Overall, the thinking leans toward disclosure in cases where a person typically would be credited—like the photographer of a published image—or when the content is generated using a model trained on potentially copyrighted material. But in cases where the company’s own data is used and the public doesn’t typically expect an author or source to be cited—like an advertisement—disclosure is considered less necessary. 

“As long as it’s accurate, right? That’s what they care about,” said Brian Kirkland, chief information officer of Choice Hotels. “It’s the quality of the content, the accuracy of the content that matters—not as much as the source—in what we do.” 

When a company is able to train the generative model on its own dataset and ensure the content is accurate and original, it can use its own discretion about disclosing the use of generative AI, said Hatim Rahman, assistant professor of management and organizations at the Kellogg School of Management at Northwestern University. 

Companies already outsource things like copywriting without disclosing who wrote the marketing material, and the public doesn’t have an expectation of disclosure, he said. Similar questions came up when the photo-editing platform Adobe Photoshop rose to popularity, he added, and today most companies don’t disclose that an image they are using has been photoshopped. 

However, if a company is using a public model trained on public data, failure to disclose could create legal issues, since the model could have been trained on copyrighted material, although laws and regulations around this have yet to be clearly defined, Rahman said. 

Kirkland said Choice Hotels would want to curate and secure the data that the model is trained on to ensure accuracy before moving forward with any use cases. 

At software company Laserfiche, employees are prohibited from representing AI-generated content as their own and they are required to identify AI-generated material wherever it is used, said Thomas Phelps, senior vice president of corporate strategy and chief information officer. 

Disclosing the source of content could also be a way for companies to build trust and transparency with their customers, said Julia White, chief marketing and solutions officer at software company SAP.

Lea Sonderegger, chief digital and information officer at crystal and jewelry maker Swarovski, said, “In the end, we absolutely owe that to our customers. We owe that to humanity that we say: Where is this coming from?”

Scott duFour, chief information officer of payments company Fleetcor, said that it will become standard to disclose the use of generative AI in contexts where it is already typical to credit a human author. 

“Things like sourcing stats and research or crediting writers and photographers in company materials is commonplace,” he said. “I don’t see why giving the same transparent attribution to AI-generated images is any different.”
