How to use generative AI in news publishing: Policies and challenges

Whenever the hype around a technology slows down a bit, that is the best time to think deeply about it.

It’s pretty clear that generative AI – and AI in general – is here to stay. Accepting that presence does not mean uncritically believing that we must simply endure technology, or that technology is inevitable.

On the contrary, we must discuss how to govern AI: we must push for laws and governance, and argue for the need to democratise it and make it a transparent, inspectable technology – by far the most critical challenge we face, far more important than copyright issues.

But we must face a simple fact: AI won’t pop like a bubble. This is not just hype, and it is not just a sales technique – even if we have to train ourselves to recognise what is marketing and storytelling and what is not.

That’s why newsrooms should start thinking about how to use AI technologies, and should engage readers in an ongoing, transparent conversation.

A multidisciplinary approach

I reached out to Mattia Peretti, former JournalismAI manager at Polis, the journalism think tank of the LSE’s Department of Media and Communications. “I think the most useful and adequate approach,” he told me, “is a multidisciplinary approach. The guidelines are important, but they will never exhaust all the possible case studies, all the possible applications of these technologies.”

Peretti is collaborating with The Guardian to draft a policy for journalists and readers: “At the Guardian, they ask themselves, first of all: ‘What are our fundamental principles? How can we translate them into concrete examples?’ But they try not to treat journalists in a condescending, ‘you can do this, you can’t do this’ manner. The approach is more like: these are our principles, we know you can put them into practice, and if you have any doubts you can discuss them.”

“We are considering,” Peretti continues, “the editorial component, the product one, the legal one (especially with text-to-image generative AI), and how to combine the editorial side with the more engineering-focused one: between these two departments there are gaps to fill with reciprocal, fine-grained knowledge.”

“I do not believe that all journalists must necessarily have advanced technical skills,” he argues, “but at the very least they have to understand how ChatGPT works – the basics, the statistical nature of these machines. Having someone with these skills in the newsroom is certainly useful. And the ethical component is essential, but it must be part of the whole conversation: we don’t need yet another department dedicated solely to AI ethics. Ethics must be the foundation of working with these machines.”

Peretti believes there should also be more collaboration between newsrooms, because this is the kind of topic that makes no sense to deal with in secret: “We need to discuss it together,” he says.

But another Italian journalist, Mario Tedeschini-Lalli, adds: “There’s nothing wrong with a little deontological competition.” Indeed, publishing how these technologies are used, along with the newsrooms’ discussions about them, can make a difference in readers’ eyes. Creating and disseminating policies is a way to regain readers’ trust, and also a way to position yourself with your audience.

Policy recommendations

To address the challenges and opportunities presented by generative AI in journalism, I propose the following policy recommendations:

  • Transparency: Newsrooms must be transparent about using AI-generated content, ensuring readers know the technology behind the stories they consume. This can include labels on AI-generated content (even if the main recommendation is still the same: do not use these tools to create articles from scratch, ever).
  • AI ethics and governance: Media organisations should establish guidelines for AI usage, including provisions for fairness, accountability, and transparency. Moreover, they should actively participate in developing industry-wide ethical standards and best practices.
  • Data privacy: Data privacy is paramount, as AI algorithms often rely on large datasets to operate. Newsrooms must adopt robust data protection measures and respect users’ privacy.
  • Human oversight: Journalists must remain involved in the news creation process to ensure that any content is verified, accurate, unbiased, and adheres to ethical standards. Continuous training and skill development are crucial to help journalists adapt to the AI era.
  • Legal frameworks: Policymakers should work with media organisations, technology companies, and other stakeholders to develop comprehensive legal frameworks that regulate the use of AI in journalism. These frameworks should address issues such as copyright, misinformation, and liability.

Challenges ahead

Despite the potential benefits, the use of generative AI in newsrooms also presents several challenges that we need to be aware of: 

  • Bias and misinformation: AI algorithms can inadvertently perpetuate biases and create misinformation. 
  • Job displacement: The rise of AI may lead to job displacement in journalism. Retraining and reskilling programs should be implemented to help journalists adapt to the changing landscape.
  • Public trust: Using AI in journalism may erode public trust in the news. That’s another argument favouring transparency about AI usage; moreover, newsrooms should emphasise their commitment to journalistic ethics and quality reporting, focusing on a less-is-more approach.

Moreover, newsrooms should encourage the following:

  • Cross-industry collaboration: Working with academia, technology companies, and non-profit organisations to share best practices and develop common guidelines for AI usage.
  • Public debate: News organisations should promote public discourse on AI usage in journalism, engaging readers and other stakeholders in a transparent conversation about the ethical implications and potential consequences of using AI-generated content.

Policies: examples and conversation

Wired was one of the first outlets to publish an article entitled “How WIRED Will Use Generative AI Tools,” which transparently explains what the editorial staff will and will not do with these tools.

These are the rules proposed by Wired’s newsroom:

  • Do not publish AI-generated text except when the AI aspect is the story’s focus.
  • Do not publish AI-edited text, as editing requires human judgment.
  • AI-generated headlines or social media posts may be used, but they require human approval.
  • AI-generated story ideas can be used, but human evaluation remains essential.
  • AI can be used as a research or analytical tool, but the newsroom should maintain the same standards as its traditional research and original reporting. 
  • Do not publish AI-generated images or videos due to legal issues and potential copyright violations.

We don’t have to agree with every single point on this list, but it is essential to discuss and share these considerations, and to make them public and transparent.

At Slow News, the Italian digital magazine I direct, we are working in several directions:

  • We have created a public, constantly updated policy that explains our ideas about generative AI and how we use it. For example, one thing that differentiates us from Wired is that we use text-to-image AI for images that illustrate abstract, pictorial, or otherwise illustrative concepts. For transparency, we always indicate when an image was created with generative software.
  • We do not use, for any reason, hyper-realistic AI-generated images or videos unless they carry precise, non-removable wording directly on the image that says, for example, “fake image, AI-generated” (a minimal code sketch of such an in-image label follows this list). That’s because images travel very fast – faster than the article itself. On this topic, Peretti told me: “I think the only sane approach is not to use this kind of hyper-realistic image. We need to be aware of how the public consumes our content. It’s the same conversation we’ve been having forever about headlines. We know that most readers only read the headlines, and to attract them and make them click, we still use headlines that are not representative of the content, or we quote sentences – it’s a tradition in Italy – that are a mash-up of the thought, or of what was actually said. Instead, we must recognise that most readers only read the headlines and only look at the images, so we can’t hide behind the claim that the article explained everything.”
  • We have prepared a shared document to discuss with the public, with other editors, and with colleagues how to improve the policy. Anyone can participate in the conversation, leave a comment, ask a question, or offer their own ethical, technological, or intellectual contribution by simply using – even anonymously – this Google Doc (it’s translated into English, and it’s available to The Fix’s readers, too).
  • We constantly study the evolution of these tools and have prepared a course – which, incidentally, also becomes a monetisation opportunity for the editorial staff – in which we cover generative AI, how to use it, and the updates required as the context and these technologies evolve.
  • We update the policy whenever we need to, or whenever we think it’s time to do so.
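As a practical illustration of the second point above, here is a minimal sketch of how such a non-removable label could be burned directly into an image’s pixels. It assumes Python with the Pillow library installed; the file names, banner layout, and wording are hypothetical examples, not Slow News’ actual tooling.

    # Minimal sketch (assumes Python + Pillow). Burns a visible disclosure
    # banner into the pixels themselves, so it travels with the file and
    # cannot be stripped the way metadata or captions can.
    from PIL import Image, ImageDraw

    def label_ai_image(src_path: str, dst_path: str,
                       text: str = "FAKE IMAGE, AI-GENERATED") -> None:
        img = Image.open(src_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        width, height = img.size
        banner_height = max(24, height // 12)
        # Solid banner across the bottom: now part of the image itself.
        draw.rectangle([(0, height - banner_height), (width, height)],
                       fill="black")
        # Pillow's default bitmap font is small; a production pipeline
        # would load a TrueType font scaled to the image size.
        draw.text((10, height - banner_height + 5), text, fill="white")
        img.save(dst_path)

    label_ai_image("generated.png", "generated_labeled.png")

A banner like this is only a baseline: a determined actor can crop or inpaint it away, which is why the wording requirement in the policy above is a minimum safeguard rather than a guarantee.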

It’s essential to have a conversation about these topics: the goal is also to stimulate a debate about how to define and update policies for these tools.

Should we, as journalists, also have a media literacy role? “I really think so,” says Mattia Peretti. “It’s probably my personal bias, but I continue to believe strongly in the educational role of journalism. We can still reach people like few others can; it’s a responsibility to be aware of.”

Alberto Puliafito

This piece was originally published in The Fix and is re-published with permission.

