Human history has been characterized by an accelerating rate of technological progress. It is caused by a positive feedback loop. A new technology, such as agriculture, allows an increase in population. A larger population has more brains at work, so the next technology is developed or discovered more quickly. In more recent times, larger numbers of people are liberated from peasant-level agriculture into professions that entail more education. So not only are there more brains to think, but those brains have more knowledge to work with, and more time to spend on coming up with new ideas.
We are still in the transition from mostly peasant-level agriculture (most of the world's population is in developing countries), but the fraction of the world considered "developed" is constantly expanding. So we expect the rate of technological progress to continue to accelerate because there are more and more scientists and engineers at work.
Assume that there are fundamental limits to how far technology can progress, limits set by physical constants such as the speed of light and Planck's constant. Then we would expect the rate of progress in technology to slow down as these limits are approached. From this we can deduce that there will be some time (probably in the future) at which technological progress is at its most rapid. This is a singular event in the sense that it happens once in human history, hence the name singularity.
Vernor Vinge, the science fiction author, had a different definition in his novels 'The Peace War' and 'Marooned in Realtime'. He implicitly assumed that there was no limit to how far technology could progress, or that the limit was very, very high. The pace of progress became very rapid, and then at some point mankind simply disappeared in some mysterious way; it is implied that they ascended to the next level of existence or something along those lines. From the point of view of the 20th century, mankind had become incomprehensibly different. That time horizon beyond which we can no longer say anything useful about the future is Vinge's Singularity. One would expect his version of the Singularity to recede as time goes by, i.e. the horizon moves with us.
When will the Singularity occur?
The short answer is that the near edge of the Singularity is due around the year 2045 AD. Several lines of reasoning point to this date. One is simple projection from human population trends. Human population over the past 10,000 years has been following a hyperbolic growth trend. Since about 1600 AD the trend has been accelerating very steadily, with the asymptote located in the year 2045 AD. Now, either the human population really will become infinite at that time, or a trend that has persisted over all of human history will be broken. Either way, it is a pretty "special" time.
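As an illustrative sketch (not a fitted model), a hyperbolic trend of this kind can be written as P(t) = C / (2045 - t). In the little Python snippet below, the asymptote year is the one named in this post, and the constant C is an assumption chosen only so the curve passes near 6 billion around the year 2000:

T_S = 2045.0   # assumed asymptote year, taken from the text
C = 2.7e11     # assumed constant, chosen so the curve gives roughly 6 billion near 2000

def hyperbolic_population(year):
    # Hyperbolic model: population blows up as the year approaches T_S
    return C / (T_S - year)

for year in (1600, 1800, 1900, 1960, 2000, 2030, 2040):
    print(year, round(hyperbolic_population(year) / 1e9, 2), "billion")

The point is only that a curve of this shape tracks historical population reasonably well for centuries and then blows up as the asymptote approaches; it is not a claim that population will actually become infinite.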
If population growth slows down and the population levels off, then we would expect the rate of progress to level off, then slow down as we approach the physical limits built into the universe. There's just one problem with this naive expectation: it's the thing you are probably staring at right now, the computer.
Computers aren't terribly smart right now, but that's because the human brain has about a million times the raw power of today's computers.
Since computer capacity doubles roughly every 18 months to two years, closing a million-fold gap takes about 20 doublings, so we expect that in about 33 years computers will be as powerful as human brains. A couple of years after that, they will be twice as powerful, and so on. And computer production is not limited by the rate of human reproduction. So the total amount of brain-power available, counting humans plus computers, takes a rapid jump upward in 33 years or so. Thirty-three years from now is 2045 AD.
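Here is a quick back-of-envelope check of that estimate, assuming the million-fold gap from the previous paragraph and a doubling time of roughly 20 months; all three numbers are the post's own round figures, not measurements:

import math

GAP = 1e6              # assumed million-fold gap between the brain and today's computers
DOUBLING_YEARS = 1.65  # assumed doubling time, roughly 18-24 months
START_YEAR = 2012      # the "now" this post is counting 33 years from

doublings_needed = math.log2(GAP)                  # about 20 doublings
years_needed = doublings_needed * DOUBLING_YEARS   # about 33 years
print(round(doublings_needed), round(years_needed), START_YEAR + round(years_needed))

With those inputs the crossover lands at about 2045, matching the population asymptote above.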
Can the Singularity be avoided?
There are a couple of ways the Singularity might be avoided. One is if there is a hard limit to computer power that is well below the human-equivalent level. "Well below" here means something like a factor of 1,000 below. If, for example, computer power were limited to only a factor of 100 short of human capacity, then you could cram 100 CPU chips into a box and get the power you wanted, and you would then concentrate on automating the chip production process to get the cost down. Current photolithography techniques seem to be good for only about a factor of 50 improvement over today's chips, which covers only a small part of the million-fold gap. So it seems that we need at least one major process change before the Singularity, and perhaps no such change exists.
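A rough sanity check on those factors (all of them are the round numbers used above, not measurements):

brain_vs_chip_gap = 1e6     # the ~million-fold gap mentioned earlier in the post
chips_per_box = 100         # chips you might cram into one box
lithography_headroom = 50   # assumed remaining improvement from current photolithography

remaining = brain_vs_chip_gap / (chips_per_box * lithography_headroom)
print(remaining)  # 200.0 -- the factor still unaccounted for

Even granting both the parallelism and the lithography headroom, a factor of roughly 200 remains, which is why at least one major process change seems necessary.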
Critiques of the Singularity
Because the singularity is such a new and speculative idea, and the subject of little academic study, people take practically every imaginable position with respect to it. Some, unfamiliar with the idea and shocked by it, dismiss it outright or simply react with confusion. Others, such as the philosopher Max More, dismiss some of the central propositions after more careful study. A substantial number, such as the futurist Ray Kurzweil, embrace it openly and without many qualifications; Kurzweil seems to expect a positive outcome with very high probability.
Criticisms of the singularity generally fall into two camps: feasibility critiques and desirability critiques. The most common feasibility critiques are what are commonly referred to as the Imago Dei objection and the Microsoft Windows objection. Imago Dei is Latin for "image of God", the doctrine that humans are created in God's image. If humans are really created in the image of God, then we must be sacred beings, and the idea of artificially creating a superior being becomes dubious. If such a superior being were possible, wouldn't God have created us that way to begin with? Unfortunately for this view, science, experimental psychology, and common sense have revealed that humans possess many intellectual shortcomings, and that some people have more of these shortcomings than others. Human intelligence isn't perfect as it is; long-term improvements may become possible with new technologies.
The Microsoft Windows objection often surfaces when the topic of super-intelligent artificial intelligence is brought up, and goes something like this: "How can you expect super-intelligent robots in this century when programmers can't even create a decent operating system?" The simple answer is that too many cooks spoil the broth: operating systems are built by huge numbers of programmers working without any coherent theory that they can agree on. In other fields, such as optics, aerospace, and physics, scientists and engineers cooperate effectively on multi-million-dollar projects because there are empirically supported theories that constrain many of the final product's parameters. Artificial intelligence can reach the human level and beyond if it one day has such an organizing theory. At present, no such theory exists, though there are pieces that may fit into the puzzle.
Lastly, there are desirability critiques. If we humans build a more intelligent species, might it replace us? It certainly could, and evolutionary and human history strongly support this possibility. Creating super-intelligence eventually seems hard to avoid, though. People want to be smarter, and to have smarter machines that do more work for them. Instead of trying to stave off the singularity forever, I think we ought to study it carefully and make purposeful moves in the right direction. If the first super-intelligent beings can be constructed such that they retain their empathy for humanity, and wish to preserve that empathy in any future iterations of themselves, we could benefit massively. Poverty, and even disease and aging, could become things of the past. There is no cosmic force that compels more powerful beings to look down upon weaker ones; rather, this is an emotion that comes from being animals built by natural selection. Under natural selection it is often evolutionarily advantageous to selectively oppress weaker beings, though some humans, such as vegans, have demonstrated that genuine altruism and compassion are possible.
In contrast to Darwinian beings, a super-intelligence could be engineered for empathy from the ground up. A singularity originating with enhanced human intelligences could select the most compassionate and selfless subjects for radical enhancement first. An advanced artificial intelligence could be built with a deep, stable sense of empathy, and could even lack an observer-centered goal system entirely. It would have no special desire to discard its empathy, because it would lack the evolutionary programming that causes that desire to surface in the first place. The better one understands evolution and natural selection, the less plausible it seems that Darwinian dynamics will apply to super-intelligence.
We should certainly hope that benevolent or human-friendly super-intelligence is possible, or human extinction could be the result. Just look at what we're already doing to the animal kingdom. Yet, by thinking about the issues in advance, we may figure out how to tip the odds in our favor. Human-posthuman synergy and cooperation could become possible.