Thank you for your comment, Andrew!

I agree it's a big assumption. We don't know whether that's the only way to achieve AGI, but it's reasonable to copy, as closely as we can, the only instance of AGI-level intelligence we know of, unless there are good reasons not to.

Let's compare it to the case of planes and birds, the prototypical example of not blindly following biology. Why not do in AI what we've done elsewhere? We take inspiration from biology and follow in its footsteps until we find another solution that better fits our goals.

It turns out we shouldn't do the same in AI, because there's a crucial difference between AI and previous technologies. In the planes-birds case, we shifted paths because we knew a lot about the underlying physics of aerodynamics and flight mechanics. We didn't design propellers and fixed wings because we couldn't simulate bird-like lift and propulsion; we did it because, given the underlying physical laws, we knew we had other options. We departed from evolution's solution because what we knew about the world guaranteed our success to some degree.

In the case of AI, the underlying mechanisms and processes are largely unknown. We're still discovering them, because neuroscience isn't much older than AI; two of the papers I mention are from this year and the last.

The reason for AI to part ways with neuroscience isn't that we know we can achieve AGI another way; it's that we don't have another option. Let me ask you something: if we knew how to recreate the anatomy and physiology of neurons in silicon, do you think we'd be doing AI the way we're doing it now?

But not having another option isn't a good reason for the AI community to forget about ANNs' theoretical foundations.

On the other hand, AI (deep learning and neural networks in particular) is achieving great results. There are a ton of useful applications that work decently for simple/narrow tasks.

Also, there are indeed other problems in AI besides the simplicity of artificial neurons: biases, lack of interpretability, safety and accountability issues, pollution, job losses, etc. Some of those have nothing to do with my article, but some do.

Not every problem can be reduced to the fact that artificial neurons are too simplistic, but that's a good starting point. Why not start from the very foundations if we want to solve AI's problems?

The conversations you mention would come eventually, but I'd bet the tone would be very different.

What do you think? :)

P.S. I appreciate such good arguments and criticism of my article. I'm sure I have a lot to learn, and comments like yours are a source of growth!



Alberto Romero
AI and Technology | Analyst at CambrianAI | Weekly Newsletter: | Contact: