
Agenda

KEYNOTE
15 min
17:45 - 18:00 EET
27 Sep.
Main Hall
How do AI models understand truth?

As generative AI models increasingly replace ranked blue links as the first line of response when people search for information online, it is critical to question how these models determine what is true when they produce answers. While much attention has been paid to the economic impact this shift in search has on publishers, far less has been given to the risks that incorrect or manipulative AI-generated content poses to users, especially on sensitive topics like health. AI-endorsed falsehoods can stem from opaque training processes and design choices.

Do models favor majority views? Are certain sources prioritized? How is uncertainty communicated? Do these decisions affect the visibility of minority or alternative perspectives?

As high-quality information, including scientific journals and media content, is increasingly paywalled and unavailable for scraping by the major AI companies (justifiably so), disinformation actors actively target large language models with abundant, optimized training material on the open web.

The keynote will focus on the solutions available for addressing this critical problem: ethical design, quality training data, and ongoing human oversight.

Speakers