
AI Has an Invisible Misinformation Problem

I highly recommend reading until the end.

Alberto Romero
9 min read · Jun 17, 2022
Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

The following is a selection from The Algorithmic Bridge, an educational newsletter about the AI that matters to your life. Its purpose is to help you understand the impact AI has in our world and develop the tools to better navigate the future.

Large language models have a lot of challenges ahead.

They’re costly and polluting. They need huge amounts of data and compute to learn. They tend to reproduce stereotypes and biases that can harm already-marginalized minorities. The larger they are, the harder it is to make them safe and to hold the people behind them accountable. In some cases, companies build them simply to signal that they, too, are in the game. And companies tend to overemphasize merits while dismissing deficiencies. To name just a few.

But there’s another problem that, even though we’re aware of it, can be extremely hard to detect: AI-based misinformation.
