Publishers like BDG, BuzzFeed and Trusted Media Brands are starting to use generative AI technology in their sales operations to reduce the amount of time it takes to pitch a new client, respond to a request for proposal (RFP) or ultimately close a deal.
But what happens when confidential information in advertiser RFPs gets uploaded by salespeople who are just trying to do their jobs a little quicker?
It’s a question that publishers are being forced to grapple with as they look to put guardrails in place when using AI tools to boost sales.
The large language models (LLMs) that these AI tools are built on can continuously train on the data run through their systems. And with publishers’ product teams building new AI plug-ins for their customer relationship management (CRM) platforms that help map out prospective ad campaigns, or AI co-pilots that streamline time-consuming tasks like RFP summarization, publishers are sending potentially sensitive information into these LLMs.
The time savings and potential revenue gains from using this tech in a sales capacity are hard to pass up. But understanding how these models have scraped data in the past, and the nuances of securing sensitive information, has to be a top priority, said Myles Younger, head of innovation and insights at U of Digital.
BuzzFeed, for its part, has taken steps to limit the use of confidential information. The license it purchased from OpenAI, whose technology all of its co-pilots are built on, prevents the sales team’s more confidential information from being used to improve the LLM and subsequently surfacing in the public-facing ChatGPT, according to Bill Shouldis, BuzzFeed’s staff product manager of revenue innovation.
“We have our agreement [with OpenAI] that none of our data is used for training,” said Shouldis. But there is also a human stopgap: salespeople avoid uploading highly confidential information in the first place, to further prevent any unintended consequences, he added.
When asked whether prospective clients or current advertisers have to be notified that their information is running through an AI tool connected to a larger LLM, Shouldis said he wasn’t sure. Because the tools BuzzFeed uses are proprietary, their privacy settings are stricter than those of a publisher using an open-access AI tool. A BuzzFeed spokesperson did not respond to the same question about informing clients of the use of AI in its sales operations.
BDG, on the other hand, is using Google’s AI tools to experiment with building sales-side efficiencies. The publisher chose Google over OpenAI in part because its first- and third-party data is already stored in Google Cloud, eliminating the friction and security risk of re-uploading years’ worth of historical sales and campaign data into another LLM. But Google’s tech stack also feels more secure, according to CTO Tyler Love.
“It’s a Google-grade product, whereas I feel like OpenAI and ChatGPT itself are – I’m sure they’re working on very ambitious things – but they’re not really ready for throwing 10 million requests through them all at once. Where I think Google’s doing that already,” Love said.
Trusted Media Brands, which is testing a variety of AI tools, is currently running all contracts and agreements with AI tech companies through its legal team, down to the most minute click-to-accept terms and conditions, according to Jacob Salamon, vp of business development at TMB. That vetting is being done specifically to protect the company’s IP assets, given the library of 95,000 user-generated video licenses it has acquired through Jukin Media.
Those licensed videos, after all, generate a good deal of TMB’s revenue, including through renting them to advertisers who want to use the assets in brand campaigns. So avoiding AI companies whose usage agreements have more “nefarious” terms than others’ is paramount, Salamon said, though he declined to name those companies.
“The firewalls that publishers and agencies have to construct around their clients to make sure that their clients’ proprietary information doesn’t leak out — it’s a challenge,” said U of Digital’s Younger, who added that he’s not sure the issue can be solved just yet.