AI Can’t Benefit All of Humanity
AI is not the problem. We are. The system is.
Bill Gates, the Microsoft co-founder respected for the philanthropic work he has done since stepping down as CEO, recently published a must-read blog post on the present and future of AI, titled “The Age of AI has begun.”
Gates, who can hardly be accused of being a techno-pessimist or anti-technology — much less anticapitalist — concluded with a set of principles that “should guide” the public conversation on AI. Here’s the second one:
“[M]arket forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity.”
Gates advocating for AI regulation is help we didn’t know we needed. And his attack on the flaws of the free market deals a big blow to OpenAI’s grand purpose: if market forces won’t naturally help the poorest, then AI can’t benefit all of humanity. Let’s see why.
This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between AI, algorithms, and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.
AI to benefit all of humanity?
As we know very well, because OpenAI’s PR department has made sure we do, the company’s ultimate purpose, and the reason it built ChatGPT and GPT-4, is to create AGI to “benefit all of humanity.” Despite my critical stance toward the company, I believe they’re honest about this: they want to fulfill that promise.
Yet their definitions of “benefit” and “all of humanity” don’t necessarily match yours or mine. There’s a hidden irony here: if not all of humanity agrees with OpenAI’s definition of “all of humanity,” shouldn’t they change it? What they call AI alignment is only alignment with their values, not with “human values” (whatever those are).