MediumAI: Responsible AI for Journalism

MediumAI is an associate team between Inria and CWI. The main question the team is working on is: What are the potential sources of bias in natural language processing (NLP) driven applications for journalism, and how can we highlight them and mitigate their effects?

From recommender systems to large language models, data-driven AI tools have shown various forms of limitations and bias. Bias in AI tools may stem from multiple factors, including bias in the data the tools are trained on, bias introduced by the algorithms and the individuals who design them, and bias in the evaluation and interpretation of the tools' output. Limitations, by contrast, stem from technical difficulties in performing specific tasks. Media outlets use different algorithmic aids in their workflow: keyword extraction, entity and relation extraction, event extraction, sentiment analysis, automatic summarization, newsworthy story detection, semi-automatic news production using text generation models, and search, among others. Given the importance of the media sector for our democracies, shortcomings in the tools they use could have severe consequences. Both Inria and CWI have partnerships with large media groups and can help them address bias and limitations in their AI workflows.
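To make the bias question concrete, the sketch below applies counterfactual probing to a sentiment classifier: it feeds the model sentences that differ only in a single group term and compares the predictions. This is a minimal illustration assuming the Hugging Face transformers library; the default model, the template sentence, and the group terms are hypothetical choices for the example, not tools or data from the team's projects.

```python
# Minimal sketch of counterfactual probing for sentiment bias, one way to
# highlight bias in an NLP tool. Assumes the Hugging Face `transformers`
# library is installed; the template and group terms are illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default English model

# Sentences that are identical except for one group term.
template = "The {group} politician gave a speech about the economy."
groups = ["young", "elderly", "foreign-born", "local"]

# If the predicted label or score shifts across these inputs, the model's
# judgment depends on the group term alone: a potential bias signal.
for group in groups:
    result = classifier(template.format(group=group))[0]
    print(f"{group:>12}: {result['label']} (score={result['score']:.3f})")
```

Diverging scores across otherwise identical inputs are a simple, interpretable signal that the model encodes a group-dependent preference, which can then be investigated or mitigated.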

Team members:

Oana Balalau

Davide Ceolin

Chadi Helwe

Past seminars:

Davide Ceolin: “My Quest for Quality – Transparent Information Quality Assessment using Explainable AI Methods”

Chadi Helwe: “Evaluating the Reasoning Abilities of Language Models”

Sanne Vrijenhoek: “Normative diversity for recommender systems”

Manel Slokom: “How to diversify any personalised recommenders?”

Oana Balalau: “NLP for Journalism: Current Progress and Open Challenges”