Interesting thoughts! Let's see:
1. I have to say my knowledge of ethics and morality is probably insufficient to debate this issue, but I'll try!
Even if I agree that there isn't a universal morality framework, I think we could find some universal commonalities to all human beings (and even animals).
The fact that we haven't found a definitive standard for what's morally acceptable doesn't mean there can't be some aspects of morality we all agree upon.
In fact, we can probably all agree that killing another person is bad in itself, whatever reasons might be offered to justify the act. We can start from that common ground with AI.
2. AGI is indeed poorly defined. How can we define AGI if we don't even have a consensus definition of intelligence? And what does "general" mean? We humans can't do everything; do we have general intelligence?
The problem with AGI is that we think so highly of ourselves that we consider human intelligence to be basically AGI-level. I think we'd be better off calling AGI "human-like intelligence" instead.
Still, however we define the term, an AGI is at least as intelligent as we are. It may lack consciousness, emotions, and other very human features. But in terms of intelligence, by definition, it's at the very least as capable as we are.
There may be people who consider a dog's intelligence AGI-level, but that's another question. I disagree with that.
3. Those are the big questions! To be honest, I'd rather look for their answers inside us, either scientifically through the cognitive sciences, or spiritually through introspection. I wouldn't bet on AGI to give us those answers.
Even if AGI could solve each and every problem we face - which is extremely unlikely - I'd still argue it wouldn't be worth it if we lose too much in the process. In the extreme case, there would be nothing left to solve because we wouldn't be here anymore.
But even in milder cases, progress for progress's sake hardly justifies any suffering beyond that already inherent to the human experience of life.
What do you think? :)