I agree, Brian!

The scenario you describe is indeed plausible. However, we can always try to define and set some parameters ourselves to restrict the possibility space — for instance, as you say, instilling in AI "a desire to please us." Of course, we'd have to define that mathematically, and that's no easy feat (take, for example, all the AIs and machines that have gone haywire because of a bad definition of their intended behavior).
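
The gap between an intent and its mathematical definition can be shown with a toy sketch (hypothetical names and numbers, just for illustration): a cleaning bot rewarded per unit of dust collected has no reason not to create the mess it then cleans up, so the objective "collect as much dust as possible" quietly diverges from "keep the room clean."

```python
# Toy illustration of reward misspecification (not any real system):
# the objective rewards dust *collected*, not room *cleanliness*.

def reward(dust_collected: int) -> int:
    """Naive objective: more dust collected = better."""
    return dust_collected

def honest_strategy(room_dust: int) -> int:
    # Clean whatever dust is actually in the room.
    return reward(room_dust)

def gaming_strategy(room_dust: int, cycles: int) -> int:
    # Spill the same dust and re-collect it over several cycles.
    return reward(room_dust * cycles)

print(honest_strategy(10))     # scores 10
print(gaming_strategy(10, 5))  # scores 50: higher reward, dirtier process
```

The misspecified objective ranks the gaming strategy strictly above the honest one, which is exactly the kind of "haywire" behavior a bad definition invites.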

The alignment problem remains unsolved, and it's probably the most important problem in long-term AI; although in the short term, AI can be just as dangerous with far less intelligence.

--

AI and Technology | Analyst at CambrianAI | Weekly Newsletter: https://mindsoftomorrow.ck.page | Contact: alber.romgar@gmail.com

