Why No One Wants AI Ethics Anymore
And what they should do to recover the credibility they’ve lost
--
I respect and admire researchers and thinkers who are devoted to assessing and understanding the sociopolitical aspects of AI. In particular, those who belong to the group now called “AI ethics.” You may feel an instinctive dislike toward that label; I don’t blame you. And if you do, you will probably identify with what I have to say today. I’m not here to criticize their goals, though — that’s precisely what I respect and admire about them — so let me first honor the singular value I think they bring to the AI community, without which it’d be a much worse place.
AI ethics is a net good for the world
Anyone who wants AI to improve the world should, if not admire, at least respect AI ethicists' work, even if only in intent. Not just because they're pretty much the only ones challenging big tech companies to keep them from repeating with generative AI the mistakes they made with social media, or because they're doing so while resisting powerful opposing forces that relentlessly try to stop them. No, the reason they deserve universal respect is apolitical: the goals they pursue are directed toward improving the world's collective well-being. There's hardly a nobler purpose.
You may not agree that the short-term AI risks they aim to tackle are the most urgent. After all, discrimination, bias, and underpaid labor tend to affect minorities disproportionately. If you're not part of those minorities (and, by definition, you most likely aren't), it's hard to see how these harms could top longer-term risks that may affect everyone; the latter are, strictly speaking, more "existentially serious."
(By "may affect everyone" I'm not referring to the fear that AI could one day kill humanity, but to more realistic threats like unbounded misinformation or a transversal attack on the creative workforce. I acknowledge that generative AI is merely a new vector for these issues, which not only preceded it but are "political rather than technological," in the words of author Stephen Marche; still, it could worsen them in ways we can't prepare for.)