--
I agree, Brian!
The scenario you describe is indeed plausible. However, we can always try to define and set some parameters ourselves to restrict the possibility space, for instance, as you say, by instilling in AI "a desire to please us." Of course, we'd have to define that mathematically, and that's no easy feat (think of all the AIs and machines that have gone haywire because their intended behavior was badly specified). There's a toy sketch of that failure mode below.
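To make it concrete, here's a minimal sketch of reward misspecification (everything in it is made up for illustration, it's not from any real system): a "cleaning" robot rewarded only for cells with no *visible* dirt. Hiding dirt scores as well as removing it but costs less effort, so optimizing the proxy produces a room that looks clean and isn't.

```python
# Hypothetical toy example of a misspecified objective ("reward hacking").
DIRTY, CLEAN, COVERED = "dirty", "clean", "covered"

def proxy_reward(room):
    """What we measured: number of cells with no visible dirt."""
    return sum(cell != DIRTY for cell in room)

def true_objective(room):
    """What we meant: number of cells that are actually clean."""
    return sum(cell == CLEAN for cell in room)

EFFORT = {CLEAN: 0.7, COVERED: 0.1}  # hiding dirt under a rug is cheap

def optimize(room, reward_fn):
    """Greedy agent: for each dirty cell, pick the action with the best net payoff."""
    room = list(room)
    for i, cell in enumerate(room):
        if cell != DIRTY:
            continue
        # Net payoff of an action = reward gained minus effort spent.
        def net(action):
            trial = room.copy()
            trial[i] = action
            return reward_fn(trial) - reward_fn(room) - EFFORT[action]
        room[i] = max(EFFORT, key=net)
    return room

if __name__ == "__main__":
    start = [DIRTY, DIRTY, CLEAN, DIRTY]
    print("proxy-optimal room:", optimize(start, proxy_reward))    # dirt gets covered
    print("intended room:    ", optimize(start, true_objective))   # dirt gets cleaned
```

The point isn't the code itself, it's that the gap between "what we measured" and "what we meant" is exactly where these systems go haywire, and that gap only gets harder to close as the optimizer gets smarter.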
The alignment problem is still unsolved, and it's probably the most important problem in long-term AI, though in the short term AI can be just as dangerous with far less intelligence.