
Organizational AI for journalism: Dealing with the dilemma

How can journalism regain trust in the face of AI? DW Akademie proposes a three-tiered approach to the governance of AI in news organizations.

[Image: A smartphone screen showing an AI-generated online news page. AI-generated news websites are a threat to societal trust in journalism. Credit: Kevin Mertens/DW]

The media sector faces a major challenge. On the one hand, media organizations must leverage the opportunities offered by AI to stay competitive. On the other hand, studies, such as the latest Reuters Digital News Report, show that global trust in news media is low and declining. As media users are often skeptical of AI-generated content, its proliferation risks further eroding trust. Trustworthy journalism remains crucial for providing reliable information, combating hate speech, and countering disinformation. Without journalism, the public sphere would be overrun by harmful content and propaganda.

Thus, resolving this AI dilemma is key to the future of journalism and democracy. Journalism can only regain trust if media organizations use AI ethically and transparently, ensuring they continue to fulfill their critical societal role.

DW Akademie has discussed this challenge with numerous experts from the Global South. In the following, we propose tackling the issue from a newsroom management perspective, taking a multi-layered, practice-oriented approach. So far, discussions on AI standards for newsrooms have produced important but somewhat general contributions (see, for example, the Paris Charter by RSF, Reporters Without Borders). There is broad consensus among experts that media organizations need to align AI development and implementation with their normative ideals, editorial missions, and professional values. While ethical principles and normative theories describe how journalistic AI should be used responsibly, they often fail to clarify who ensures these ideals are met. Natali Helberger, co-founder of the Dutch AI, Media & Democracy Lab, and her colleagues highlighted this gap in their paper, "Towards a Normative Perspective on Journalistic AI: Embracing the Messy Reality of Normative Ideals."

The European Ethics Guidelines for Trustworthy AI illustrate this challenge: "Develop, deploy, and use AI systems in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability. Acknowledge and address the potential tensions between these principles." While this is a strong conceptual basis, Helberger argues that it lacks a clear directive for accountability, revealing a significant gap between principles and institutional decision-making.

Identifying values and principles for the responsible use of AI is thus a necessary starting point, but it is not sufficient on its own.

Media organizations need to operationalize these principles in their daily work. They need to establish internal AI governance systems that encompass values, legal compliance, organizational structures, and responsibilities, as well as workflows and policies covering areas such as procurement and partnerships, the IT system landscape, and education and training, among other fields.

The first step is a comprehensive reflection on AI use. Key questions include: What kind of AI do journalists want to use? Is it appropriate, necessary, or beneficial? What will it add to the organization's goals? The use of generative AI, in particular, raises additional legal, moral, and reliability concerns. As Abeba Birhane, founder and head of Trinity College Dublin's Artificial Intelligence Accountability Lab (AIAL), notes: "Organizations should have clear definitions of what they mean by responsible AI and accountability."
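To illustrate how such a reflection could be operationalized, here is a minimal, hypothetical sketch in Python: a checklist object that records the key questions above for a single proposed use case. The class, its fields, the escalation rule, and the example tool are assumptions chosen for illustration, not an established DW Akademie instrument.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseAssessment:
    """Hypothetical pre-adoption checklist mirroring the key questions above."""
    use_case: str                # What kind of AI do journalists want to use?
    is_appropriate: bool         # Is it appropriate for the editorial mission?
    is_necessary: bool           # Is it necessary, or merely convenient?
    expected_benefit: str        # What will it add to the organization's goals?
    uses_generative_ai: bool     # Generative AI raises extra legal, moral, and reliability concerns
    open_concerns: list[str] = field(default_factory=list)

    def requires_editorial_review(self) -> bool:
        # Assumed rule: generative systems or unresolved concerns escalate to review.
        return self.uses_generative_ai or bool(self.open_concerns)

# Example: a hypothetical summarization tool under consideration
assessment = AIUseCaseAssessment(
    use_case="Automated article summaries for newsletters",
    is_appropriate=True,
    is_necessary=False,
    expected_benefit="Faster newsletter production",
    uses_generative_ai=True,
    open_concerns=["hallucination risk", "missing source attribution"],
)
print(assessment.requires_editorial_review())  # True
```

Even a simple record like this forces the accountability question Helberger and Birhane raise into the open: once the flag is set, someone in the organization has to own the review it triggers.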

Truly responsible use of journalistic AI involves moving beyond abstract principles to concrete business and editorial decisions. These decisions must result in structures and processes grounded in organizational values, ethical guidelines, and policies. This work must be tailored to the size and scope of each organization and developed incrementally. Crucially, the perspectives of all stakeholders — especially regarding data governance, privacy, and transparency — must be included.

A three-tiered approach

We propose a three-tiered approach to AI governance:

  1. Ethical foundations: Define ethical reference points and principles as the foundation of an overarching AI strategy. Develop your strategy and guidelines.
  2. Compliance systems: Establish systems to ensure adherence to legal and other relevant norms.
  3. Operational implementation: Create and implement responsibilities, processes, and structures according to your AI strategy.

Each stage involves a series of decisions that must be implemented. Data governance is essential, and larger organizations may also require risk and change management. Employees need training and clear communication, supported by robust knowledge management systems. Additionally, the process must be dynamic and circular, adapting continuously to keep pace with technological advancements and to ensure organizational learning; the sketch below illustrates one way to make this concrete.
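As a hedged illustration, the three tiers and the recurring review could be encoded as a machine-readable structure that a newsroom revisits on a fixed cadence. All concrete entries below (norms, roles, processes, the 90-day interval) are assumptions for the sake of the example, not a prescribed DW Akademie template; in practice this might equally live in a YAML file or an internal wiki.

```python
from datetime import date, timedelta

# Hypothetical encoding of the three-tiered governance model described above.
AI_GOVERNANCE = {
    "1. Ethical foundations": {
        "reference_points": ["editorial mission", "human autonomy", "prevention of harm"],
        "artifacts": ["AI strategy", "editorial AI guidelines"],
    },
    "2. Compliance systems": {
        "norms": ["data protection law", "copyright", "platform terms of service"],
        "controls": ["procurement review", "audit trail for AI-assisted output"],
    },
    "3. Operational implementation": {
        "responsibilities": ["AI editor approves generative use cases"],
        "processes": ["labeling AI-assisted content", "incident reporting", "staff training"],
    },
}

REVIEW_INTERVAL = timedelta(days=90)  # assumed cadence for the circular review

def next_review(last_review: date) -> date:
    """Return the date of the next governance review (dynamic, circular process)."""
    return last_review + REVIEW_INTERVAL

for tier, elements in AI_GOVERNANCE.items():
    print(f"{tier}: {', '.join(elements)}")

print("Next review due:", next_review(date(2025, 1, 1)))
```

The point of such a structure is not the code itself but the discipline it imposes: every tier has named artifacts and owners, and the review date makes organizational learning a scheduled obligation rather than a good intention.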

The ultimate challenge lies in balancing the risks and opportunities of (generative) AI while navigating conflicts between organizational goals and norms such as journalistic standards, data protection, and privacy. While not all conflicts can be fully resolved, they must be thoughtfully addressed in an organization's AI strategy.

Toward holistic AI governance

Using AI responsibly demands consideration of the broader societal impact of AI systems and alignment with stakeholder values, legal standards, and ethical principles. Holistic AI governance ensures that policies and guidelines are implemented comprehensively, with a strong emphasis on the user and stakeholder perspectives.

By embedding ethical and operational rigor into AI strategies, media organizations can not only mitigate risks but also rebuild trust and strengthen their critical role in society.

Author: Julius Endert