Scholarly Paper Analysis
Future Progress in Artificial Intelligence: A Survey of Expert Opinion
Vincent C. Müller & Nick Bostrom
The Future of Humanity Institute, Department of Philosophy & Oxford Martin School
Oxford University
"Abstract: There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be 'bad' or 'extremely bad' for humanity"
The full paper is hosted on Nick Bostrom's website, and is definitely worth a read if you're the type to be on this blog in the first place.
First of all, this paper does appropriately warn that "...predictions on the future of AI are often not too accurate... and tend to cluster at around '25 years or so', no matter at what point in time one asks." Futurologists, particularly those who place an extreme amount of personal value on making it to a specific vision of the future, especially futures that allow for circumventing mortality, have a marked bias toward predicting the arrival of those conditions before their own passing. Ray Kurzweil, probably the most famous futurologist and advocate of a technological singularity, has a remarkable and horrible fear of death 1, which to me reads as psychologically similar to an equally death-anxious millenarian advocating for the rapture to occur before their own death.
I only mean to say that this is a psychological similarity, because the implications of this psychology for the ultimate validity of the predictions differ considerably between someone like Kurzweil and someone like Harold Camping: there is no particularly scientific reason to believe there will ever be a Christian apocalypse, as opposed to the extremely likely scenario that computer intelligence, by whatever definition of intelligence you like, continues to increase.
So, given such a strong psychological bias to underestimate the amount of time before HLMI 2 is created, why should a study like this matter any more than an extensive survey of doomsday preppers asked when the bomb is going to drop?
First of all, the use of HLMI, as opposed to AI, gets around the immense confusion surrounding the definition of the term AI, which is nice, because defining AI in many circles requires a working definition of consciousness, and it's super hard to do that to everyone's satisfaction. Behavioral metrics, such as the capability to outperform a human at a given task, are much easier to set criteria for and to test to the satisfaction of a third party. A century of promises of the imminent arrival of 'AI' has turned people off of believing that AI will ever be achieved, forgetting that the reason 'AI' has not been achieved is that the behavioral goalposts that were supposed to herald its creation, such as defeating a human at chess or Jeopardy, have been retrospectively deemed inconsequential. Starting with a behavioral goalpost, rather than the fuzzy notion of intelligence, is necessary for a survey such as this to produce comparable results across individuals.
Secondly, HLMI isn't immortality. HLMI is an attractive concept, and certainly many people in the field of machine intelligence would like to see it happen in their lifetimes and might therefore over-optimistically guess an early arrival, but I take this bias less seriously than the bias to imagine that your own life will be unending.
Finally, extrapolation is better suited to predictions of machine intelligence than to other forms of prognostication. There are trends in the growth of computing power and general technological competence that do not require random, unlikely events in order to ultimately produce HLMI. Until we can define AI in a way that's not so slippery, discussing when we'll achieve it is meaningless. Discussing when we'll achieve HLMI is both more philosophically sound and, as this blog tries to show, economically relevant, because figuring out the time frame within which we must adapt our economic system to changing technologies has serious implications.
1 Talked about at length in the Kurzweil documentary, Transcendent Man.
2 High Level Machine Intelligence, defined in the paper as a machine intelligence that can do most jobs at least as well as a typical person.