Thank you for responding, Adam!
I emphasized the word "any" to remind us that technology development doesn't occur in isolation. Every aspect of life is intertwined, so each step of progress affects everything else, which in turn affects the pace of development.
We can reflect on this by thinking about what could happen when we're already on the verge of building AGI, or about what could happen at each step of the quest.
First, the roads that lead to AGI have intermediate steps that will be exploited to the last drop. Some consequences of this are job losses, ethical issues, environmental damage, etc. These are very real problems that are reinforced by increasingly intelligent AIs.
Second, even if we assume we could get arbitrarily close to achieving AGI without it slipping out of our control, we may never get there after all, because wars and other conflicts could arise at slightly lower levels of AI.
And how can anyone successfully develop AGI? Assuming it's physically feasible, the enterprise would require a huge amount of resources and money.
If achieving AGI means asking others for money, the tech may eventually serve not the betterment of humanity but the interests of a few powerful corporations and governments: those that lent the money in the first place.
Well, actually I think that whether companies ask for money or not is largely irrelevant. A breakthrough like AGI would never be allowed to serve the betterment of humanity, at least not with the good intentions OpenAI claims. There are other, much more specific interests in play that are put first most of the time. We don't need to look very far to realize this.
Being more pessimistic, I'd even go as far as to say that we shouldn't try to develop AGI at all, although I also think there's no way to stop scientific progress. Not because I think it could be a threat to humanity, which is another debate, but because I think we'll use it in ways that harm many of us in inconceivable ways.
It's a complex topic (and a long response). I'd love to hear your thoughts on this, Adam.