AI + Automation Lab

Ethics of Artificial Intelligence: Our AI Ethics Guidelines

No matter what technology we use, it is never an end in itself. Rather, it must help us deliver on a higher purpose: to make good journalism. This purpose-driven use of technology guides our use of artificial intelligence and all other forms of intelligent automation. We want to help shape the constructive collaboration of human and machine intelligence and deploy it towards the goal of improving our journalism.

Authors: Jonas Bedford-Strohm, Uli Köppen, Cécile Schneider

Published: 30 November 2020

Woman's face with light projection | Picture: BR

That is why, before employing any new technology, we ask ourselves: does it really offer a tangible benefit to our users and employees at BR?

The work of our journalists is and will be irreplaceable. Working with new technologies will augment their invaluable contribution and introduce new activities and roles to the newsroom.

To answer this question of benefit on solid footing, time and again, we have given ourselves ten core guidelines for our day-to-day use of AI and automation.

Who is Bavarian Broadcasting?

Bayerischer Rundfunk, BR for short, is Bavaria's public broadcasting service with around eight million viewers and listeners tuning in every day throughout Germany. Learn more about us here. Within our organization, the BR AI + Automation Lab is driving and exploring the use of AI technologies from a product and editorial perspective.

1. User Benefit

We deploy AI to help us use the resources that our users entrust us with more responsibly by making our work more efficient. We also use AI to generate new content, develop new methods for investigative journalism and make our products more attractive to our users.

2. Transparency & Discourse

We participate in the debate on the societal impact of algorithms by providing information on emerging trends, investigating algorithms, explaining how technologies work and strengthening an open debate on the future role of public service media in a data society.

We study the AI ethics discourse in other institutions, organizations and companies in order to check and improve our guidelines to avoid a gap between theory and practice.

We make plain to our users which technologies we use, what data we process and which editorial teams or partners are responsible for it. When we encounter ethical challenges in our research and development, we address them openly in order to raise awareness of such problems and make our own learning process transparent.

3. Diversity & Regional Focus

We embark on new projects conscious of the societal diversity of Bavaria and Germany. For instance, we strive for dialect-aware models in speech-to-text applications and bias-free training data (algorithmic accountability).

We work with Bavarian startups and universities to make use of the AI competence in the region and to support the community through use cases in the media industry and academia. We strive for the utmost reliability in our operations and may choose to work with established tech companies on a case-by-case basis. Where possible, we work within our networks of ARD and the European Broadcasting Union (EBU), and consciously bring the ethical aspects of any proposed application into the collaboration.

4. Conscious Data Culture

We require solid information from our vendors about their data sources: What data was used to train the model? Correspondingly, we strive for integrity and quality of training data in all in-house development, especially to prevent algorithmic bias in the data and to render visible the diversity of society.

We continually raise awareness among our employees of the value of data and the importance of well-kept metadata, because only reliable data can produce reliable AI applications. A conscious data culture is vital to our day-to-day work and an important leadership task in future-proofing public service media.

We collect as little data as possible (data avoidance) and as much data as necessary (data economy) to fulfill our democratic mandate. We continue to uphold high data security standards and raise awareness for the responsible storage, processing and deletion of data, especially when it concerns personal data. We design the user experience of our media services with data sovereignty for the user in mind.

5. Responsible Personalization

Personalization can strengthen the information and entertainment value of our media services, so long as it does not undermine societal diversity and avoids unintended filter bubble effects. Hence, we use data-driven analytics as assistive tools for editorial decision-making. And in order to develop public-service-minded recommendation engines, we actively collaborate with other European media services through the EBU.

6. Editorial Control

While the prevalence of data and automation introduces new forms of journalism, editorial responsibility remains with the editorial units. The principle of editorial checks continues to be mandatory, even for automated content. But its implementation changes: checking every individual piece of content is replaced by a plausibility check of causal structures in the data and a rigorous integrity examination of the data source.

7. Agile Learning

To continuously improve products and guidelines, we need experience and learning from pilot projects and prototypes. Experiments are an explicit part of this process. Up until and including the beta phase, these guidelines offer general orientation. In the final release candidate phase, they are fully binding. That way, we ensure that our final product offering fulfills the highest standards, while still encouraging a culture of learning and experimentation in our day-to-day work. We also pledge to listen to our users, invite their feedback and adjust our services if necessary. 

8. Partnerships

We offer a practical research context for students and faculty at universities and collaborate with academia and industry to run experiments, for example with machine learning models and text generation. We exchange ideas with research institutions and ethics experts.

9. Talent & Skill Acquisition

Given the dynamic technology spectrum in the field of AI, we proactively ensure that BR has sufficient employees with the skills to implement AI technologies at the cutting edge of the industry in a responsible, human-centric way.

We aim to recruit talent from diverse backgrounds with practical AI skills, and we encourage them to deploy those skills in the service of public service journalism.

10. Interdisciplinary Reflection

Instead of running ethics reviews after significant resources are invested, we integrate the interdisciplinary reflection with journalists, developers and management from the beginning of the development pipeline. That way, we ensure that no resources are wasted on projects that predictably do not meet these guidelines.

We reflect on ethical red flags in our use of AI technologies regularly and in interdisciplinary fashion. We evaluate these experiences in light of the German public service media mandate and these ethics guidelines.