Hi Elliott, good to see you here :)

Some research groups are working on exactly that: building physical (as opposed to digital) artificial neural networks. I'd love to see more advances in that area; I don't think computers are the best way to create AI.

About extrapolation, I'm not sure finding new tactics or new ways of doing things necessarily involves extrapolating. AlphaGo found new strategies in Go by generalizing from what it already knew. Using those same learning processes to learn new tactics in chess would've been extrapolation.

But I agree the difference between generalization and extrapolation can be blurry at times, so I don't have a good argument against that.
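If it helps, here's a toy sketch of the distinction I have in mind (nothing to do with AlphaGo itself; the regression setup and numbers are made up just for illustration): a model queried inside the domain it was trained on is generalizing, while the same model queried far outside that domain is extrapolating, and typically fails.

```python
import numpy as np

# Toy illustration: fit a model on data from one domain, then query it
# inside vs. outside that domain. All values here are invented for the example.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, size=200)      # training domain: [0, 1]
y_train = np.sin(2 * np.pi * x_train)      # underlying pattern to learn

# A polynomial fit stands in for any learned model.
model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

x_inside = 0.5   # within the training domain  -> generalization
x_outside = 3.0  # far outside it              -> extrapolation

print("at x=0.5  true:", np.sin(2 * np.pi * x_inside), " predicted:", model(x_inside))
print("at x=3.0  true:", np.sin(2 * np.pi * x_outside), " predicted:", model(x_outside))
# Inside [0, 1] the prediction tracks the pattern; outside it, the fit blows up
# because nothing in training constrained the model there.
```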

About embodiment, there are initiatives in that direction, but for now, connectionist AI is focused on building virtual AIs. Integrating GPT-3 into a physical robot that could interact with the environment and learn through perception and action would produce very interesting results.

Finally, I agree AGI most likely won't be a copy of the human mind. For instance, we don't want to imbue machines with emotions or motivations; otherwise, they may find reasons to realign their goals. Yet we're the only example of intelligent life we know of, so it's reasonable to look to our brains for inspiration.

Love your thoughtful comments, Elliott! :)
