Yudkowsky's Disagreement With Kurzweil
In a recent interview, Eliezer Yudkowsky was asked: “How does your vision of the Singularity differ from that of Ray Kurzweil?”
His response, with some comments of mine:
- I don’t think you can time AI with Moore’s Law. AI is a software problem.
- I don’t think that humans and machines “merging” is a likely source for the first superhuman intelligences.
It took a century after the first cars before we could even begin to put a robotic exoskeleton on a horse, and a real car would still be faster than that.
- I don’t expect the first strong AIs to be based on algorithms discovered by way of neuroscience any more than the first airplanes looked like birds.
I think neuroscience can help us as a filter for which theories are plausible, as well as a source of general inspiration.
- I don’t think that nano-info-bio “convergence” is probable, inevitable, well-defined, or desirable.
- I think the changes between 1930 and 1970 were bigger than the changes between 1970 and 2010.
I find this debatable. Because we were alive during this period, we had time to digest the changes and become used to them.
- I buy that productivity is currently stagnating in developed countries.
- I think extrapolating a Moore’s Law graph of technological progress past the point where you say it predicts smarter-than-human AI is just plain weird. Smarter-than-human AI breaks your graphs.
I had never thought about that, but in hindsight it seems obvious.
Also see this Andrew McAfee TED talk, where he makes the point (around 07:50) that the major advancements in history are the ones that bend the curves a lot. I think Eliezer means the same thing when he says “smarter-than-human AI breaks your graphs.”
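To make the extrapolation point concrete, here is a toy sketch in Python. The doubling period, baseline year, and baseline transistor count are assumptions chosen purely for illustration, not figures from the interview or the talk:

```python
# Toy Moore's-Law-style extrapolation. All numbers here are illustrative
# assumptions (baseline year, baseline count, doubling period).

def moores_law_projection(baseline_year: int, baseline_count: float,
                          target_year: int, doubling_years: float = 2.0) -> float:
    """Naively project a count forward by assuming a fixed doubling period."""
    doublings = (target_year - baseline_year) / doubling_years
    return baseline_count * 2 ** doublings

# Roughly 2.3 billion transistors per chip around 2010 (illustrative baseline).
for year in (2020, 2030, 2040):
    print(year, f"{moores_law_projection(2010, 2.3e9, year):.2e}")

# The projection only holds while the mechanism driving the trend (human
# engineers, fixed economic incentives) stays the same, which is exactly
# what smarter-than-human AI would invalidate. Past that point the curve
# tells you nothing: the graph is broken.
```

The arithmetic itself is trivial; the point is that every projected value silently assumes the curve's drivers stay fixed, and that assumption fails at the very threshold the graph is being used to predict.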
- Some analysts, such as Ilkka Tuomi, claim that Moore’s Law broke down in the ’00s. I don’t particularly disbelieve this.
- The only key technological threshold I care about is the one where AI, which is to say AI software, becomes capable of strong self-improvement. We have no graph of progress toward this threshold and no idea where it lies (except that it should not be high above the human level because humans can do computer science), so it can’t be timed by a graph, nor known to be near, nor known to be far. (Ignorance implies a wide credibility interval, not being certain that something is far away.)
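The parenthetical about ignorance and wide credibility intervals can be made concrete with a small sketch. The log-uniform prior and its bounds below are arbitrary assumptions picked only to illustrate the shape of the argument, not anyone's actual estimate:

```python
# Illustrative only: a deliberately "ignorant" log-uniform prior over
# "years until the self-improvement threshold". The 1-to-300-year bounds
# are arbitrary assumptions, not estimates from the interview.
LOW, HIGH = 1.0, 300.0

def quantile(p: float) -> float:
    """Inverse CDF of a log-uniform distribution on [LOW, HIGH]."""
    return LOW * (HIGH / LOW) ** p

print(f"5th percentile:  {quantile(0.05):.1f} years")   # about 1.3 years
print(f"95th percentile: {quantile(0.95):.1f} years")   # about 225 years

# A state of ignorance yields an interval that spans "very soon" to
# "centuries away"; it does not license confidence that the threshold
# is far off.
```

Under that kind of prior, the 90% interval spans more than two orders of magnitude, which is the point: not knowing where the threshold lies is not the same as knowing it is distant.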
- I think outcomes are not good by default - I think outcomes can be made good, but this will require hard work that key actors may not have immediate incentives to do. Telling people that we’re on a default trajectory to great and wonderful times is false.
We should educate more people about this. This is what Elon Musk and Stephen Hawking warned about, and it is a very frequently misinterpreted message.
- I think that the “Singularity” has become a suitcase word with too many mutually incompatible meanings and details packed into it, and I’ve stopped using it.
Overall, very spot on.