I agree with you, but the solution can't simply be to align the models with OpenAI's ideas, DeepMind's values, or those of a group of English speakers.
If aligning AI models to human values and beliefs isn't straightforward, because there is no universal morality or shared sense of right and wrong and every group has its own preferences, then we should work together and move forward on as much common ground as we can find.
I'd rather not deploy these models at all than accept that only the values of a few are respected.