Who’s to Blame for AI-Generated Harm — Users or Companies?

A simple question, hard to answer

Alberto Romero
Mar 31, 2023
(Header image: Midjourney)

On the last day of February, NYU’s Gary Marcus published an essay titled “The threat of automated misinformation is only getting worse.” He warned about the ease with which you can create misinformation backed by fake references using Bing “with the right invocations.” Shawn Oakley, dubbed by Marcus a “jailbreaking expert,” said that “standard techniques” suffice to make it work, providing evidence that the threat of automated AI-generated misinformation at scale is growing.

Marcus shared his findings on Twitter, and Founders Fund’s Mike Solana responded:

My interpretation of Solana’s sarcastic tweets is this: claiming that an AI model is a dangerous tool for misinformation (or, more generally, for harm of some kind) isn’t a good argument if you’ve consciously broken its filters. He implies the problem isn’t the tool’s nature but your misuse, so you are to blame, not the company that created the tool. His “analogy” between Bing Chat and a text editor misses the…



Written by Alberto Romero

AI & Tech | Weekly AI Newsletter: https://thealgorithmicbridge.substack.com/ | Contact: alber.romgar at gmail dot com
