Thanks for this critical but polite comment, Bob.
I agree with you: I have sociopolitical biases. This article isn't intended to defend BLOOM from a technological progress perspective. It's an introduction to an initiative that I consider especially important given the typical motivations in the AI industry. Now, in what sense do you think what I wrote here has "pernicious effects"?
By the time I published this article, the model had just reached 100% of its training. It takes time to go from there to something you can compare technically to other models. That's interesting in and of itself, and I'll probably write a second piece on it, but it's not the thesis here. This is a different article that argues different things and could be criticized in different terms.
Of course, BigScience's goals aren't just to say "hey, we have an ethics-focused model." They want it to work reasonably well (I don't think it'll be close to SOTA, as it's similar to GPT-3) so further research and insights can be extracted from it. The fact that we don't yet know BLOOM's technical value doesn't make this article less worthwhile. If BLOOM turns out to have the same problems GPT-3 has (not technically, but socially and ethically), then I'd write an article saying exactly that: "This initiative hasn't achieved the goals it pursued."
Finally, "get them to find a way to lower the technological barriers for people who are Coding Impaired" is a great goal. They did other great things with this initiative that aren't less valuable just because they didn't do all the great things that should be done.
I wonder whether you agree with BigScience's values and principles, and whether you think big tech companies should do the same. That's the question this article asks the reader.