Gary Marcus (and clearing misconceptions)
There was this post at the top of reddit's /r/singularity: "Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence?"
Due to its clickbaity nature, I couldn't resist, and landed on the page titled "Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence? A Conversation With Gary Marcus".
I remembered that he was doing some "unconventional" AI stuff, and had formed the prejudice (I don't even remember on what basis) that I didn't agree much with his ideas.
So I said to myself: OK, let's read the thing to see how wrong he is.
It turns out, not that wrong. So a quick lesson from that: whenever you hold an important opinion that doesn't have much basis, it's better to double-check why you disagree in the first place. A good way to know whether an opinion is well formed is whether you are able to explain and justify it. A good way to motivate yourself to do a round of re-checking is to tell yourself: let's just see how wrong the other guy is.
In the end the only thing I disagree with is the following:
The reason there’s excitement now is basically the confluence, some people say of three things, but it’s really two. I’ve heard people say it’s the confluence of huge computers, big data, and new algorithms. But there aren’t really new algorithms. The algorithms that people are using now have been around since the ’80s and they’re just variations on the one that’s in the ’50s in some ways. But there is big data and huge machines, so now it’s profitable to use algorithms that aren’t human intelligence but are able to do this brute force data processing.
Maybe the new algorithms haven't been key in making deep learning successful, but we have had plenty of new ideas in recent years. Here are the pages of just a handful of researchers and labs associated with AI/deep learning: Hinton, Tenenbaum, Schmidhuber, Salakhutdinov, Sutskever, DeepMind, Andrew Ng.
Almost everything else is pretty spot-on, in particular this:
A lot of early AI was concerned with that, with building systems that could model the things that are out there in the world, and then act according to those models. The new systems don’t do that; they memorize a lot of parameters, but they don’t have a clean account of the objects that are out there, the people that are out there. They don’t understand intuitive psychology and how individual human beings interact with one another.
This is probably the reason I became so interested in the work of Josh Tenenbaum in the first place. He even talks about it on his home page, with the very same wording:
Current research in our group explores the computational basis of many aspects of human cognition: learning concepts, judging similarity, inferring causal connections, forming perceptual representations, learning word meanings and syntactic principles in natural language, noticing coincidences and predicting the future, inferring the mental states of other people, and constructing intuitive theories of core domains, such as intuitive physics, psychology, biology, or social structure.
“Constructing intuitive theories of core domains”. I’ll come back to that.