Individuals are getting more and more powerful. With the current rate of progress we're seeing in biotechnology, nanotechnology, artificial intelligence, and other technologies, it seems likely that individuals will one day -- and one day relatively soon -- possess powers once thought available only to nation-states, superheroes, or gods. This sounds dramatic, but we're already partway there.
Futurists use the term "Singularity" to describe the point at which technological change has become so great that it's hard for people to predict what would come next. It was coined by computer scientist and science fiction writer Vernor Vinge, who wrote that the acceleration of technological progress over the past century has itself taken place at an accelerating rate, leading him to predict greater-than-human intelligence in the next thirty years, and developments over the next century that many would have expected to take millennia or longer. He concluded: "I think it's fair to call this event a singularity .... It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it." [1]
A lot more people see it coming now -- in fact, a lot more people see it coming, and are writing about it now, than in 1993 when Vinge wrote these words.
WE'RE ALL SUPERMEN NOW
One question is just how much, using technologies like nanotechnology and genetic engineering, we should improve on the human condition. My own feeling is "a lot" -- it seems to me that there's plenty of room for improvement -- but others may feel differently. If we choose to improve, will we become superheroes or something like them?
Should we?
My six-year-old nephew, Christopher, wants to be a superhero. It was Superman for a while, then Spiderman. (Short-lived enthusiasm for the Incredible Hulk didn't survive the lameness of the film, apparently.)
And really, who wouldn't want to be a superhero of some sort? It's not so much the cape or the crime fighting that lies behind this sentiment. It's the way that superheroes don't have to deal with the limitations that face the rest of us. It's easy to see why kids, whose everyday limitations place them in a position that is obviously inferior to that of adults, would be so excited about super powers. But even as adults we face limitations of speed and strength and -- especially -- vulnerability to all kinds of pain, to death. The idea of being able to do better seems pretty attractive sometimes, even if we don't fantasize about being members of the Justice League any more.
Will ordinary people have better-than-human powers one day? It's starting to look possible and some people are talking about the consequences. Joel Garreau makes the superhero angle explicit in his book Radical Evolution:
Throughout the cohort of yesterday's superheroes -- Wonder Woman, Spiderman, even The Shadow, who knows what evil lurks in the hearts of men -- one sees the outlines of technologies that today either exist, or are now in engineering .... Today, we are entering a world in which such abilities are either yesterday's news or tomorrow's headlines. What's more, the ability to create this magic is accelerating. [2]
Yes, it is. The likely consequences are substantial. Running as fast as light, a la The Flash, might be out of the question, and web slinging is unlikely to catch on regardless of technology. But other abilities, like super strength, x-ray vision, underwater breathing, and the like are not so remote. (The dating potential promised by The Elongated Man's abilities, meanwhile, may produce a market even for those second-tier superpowers.) Regardless, transcending human limitations is part of what science and medicine are about. We're already doing so, in crude fashion, with steroids, human growth hormone, and artificial knees. More sophisticated stuff, like cochlear implants, is already available, and far better is on the way.
Would I like to be smarter? Yes, and I'd be willing to do it via a chip in my brain, or a direct computer interface. (Actually, that's already prefigured a bit in ordinary life too, as things like Google and Wi-Fi give us access to a degree of knowledge that would have seemed almost spooky not long ago, but that everyone takes for granted now.) I'd certainly like to be immune to cancer, or viruses, or aging. But these ideas threaten some people who feel that our physical and intellectual limitations are what make us human.
But which limitations, exactly? Would humanity no longer be human if AIDS ceased to exist? What about Irritable Bowel Syndrome? Was Einstein less human? If not, then why would humanity be less human if everyone were that smart? It may be true, as Dirty Harry said, that "a man's got to know his limitations." But does that mean that a man is his limitations? Some people think so, but I'm not so sure. Others think that overcoming limitations is what's central to being human. I have to say that I find that approach more persuasive.
These topics (well, probably not the Irritable Bowel Syndrome) were the subject of a conference at Yale on transhumanism and ethics. The conference was covered in a rather good article in The Village Voice, which reported that many in the pro-transhumanist community expect to encounter considerable opposition from Luddites and, judging by the works of antitechnologists like Francis Fukuyama and Bill McKibben, that's probably true. [3]
I suspect, however, that although opposition to human enhancement will produce some cushy foundation grants and book contracts, it's unlikely to carry a lot of weight in the real world. Being human is hard, and people have wanted to be better for, well, as long as there have been people. For millennia, various peddlers of the supernatural offered answers to that longing -- from spells and potions in this world, to promises of reward in the next. Soon they're going to face stiff competition from science. The success of these students of human nature suggests that the demand for human improvement is high -- probably high enough to overcome any barriers. (As Isaac Asimov once wrote, "It is a chief characteristic of the religion of science, that it works." [4])
At any rate, nothing short of a global dictatorship -- whether benevolent, as featured in some of Larry Niven's future histories, or simply tyrannical, as seems more likely -- or a global catastrophe is likely to stop the rush of technological progress. In fact, as I look around, it seems that we're living in science fiction territory already.
Take, for example, this report from the Times of London: "Scientists have created a 'miracle mouse' that can regenerate amputated limbs or badly damaged organs, making it able to recover from injuries that would kill or permanently disable normal animals." From nose to tail, the mouse is unique in the animal kingdom for its ability to regrow its nose and tail -- and heart, joints, toes, and more. But the revolution isn't complete with Mickey's new limbs. The more fascinating prospect is that this trait can be replicated in other mice by transplanting cells from the "miracle mouse." "The discoveries raise the prospect that humans could one day be given the ability to regenerate lost or damaged organs, opening up a new era in medicine." [5]
Limb regeneration and custom-grown organs! Bring it on! Then there are the ads I'm seeing for offshore labs offering stem cell therapy to Americans. I don't know whether this particular therapy lives up to its claims, but if it doesn't, the odds are that other places soon will be offering therapy that does (see the mouse story above).
Meanwhile, Cambridge University just held the second conference on Strategies for Engineered Negligible Senescence. At the conference, people discussed ways of slowing, halting, or even reversing the aging process. [6] There was also a conference on medical nanotechnology, [7] while elsewhere nanotechnologists reported that they had produced aggregated carbon nanorods [8] that are harder than diamond.
On a more personal note, my wife recently went to the doctor, where they downloaded the data from the implanted computer that watches her heart, ready to step in to pace her out of dangerous rhythms or shock her back into normal rhythms if things went too badly. I remember seeing something similar in a science fiction film when I was a kid, but now it's a reality. And, of course, I now get most of my news, and carry on most of my correspondence, via media that weren't in existence fifteen years ago.
THE FUTURE ISN'T THE FUTURE
I mention this because as we look at the pace of change, we tend to take change that has already happened for granted. But these stories now (except for my wife's device, which isn't even newsworthy today) are just random minor news items that I noticed over a period of a week or two, even though they would have been science-fictional not long ago. Much as we get "velocitized" in a speeding car, so we've become accustomed to a rapid pace of technological change. This change isn't just fast, but continually accelerating. The science-fictional future isn't science-fictional. Sometimes, it's not even the future any more.
Nonetheless, we'll probably see much more dramatic change in the next few decades than we've seen in the last. So argues Ray Kurzweil in his new book, The Singularity Is Near: When Humans Transcend Biology.
Kurzweil notes the exponential progress in technological improvement across a wide number of fields and predicts that we'll see artificial intelligences of fully human capability by 2029, along with equally dramatic improvements in biotechnology and nanotechnology. (In fact, these developments tend to be self-reinforcing -- better nanotechnology means better computers and better understanding of biology; better computers mean that we can do more with the data we've got, and progress more rapidly toward artificial intelligence, and so on.)
The upshot of this is that capabilities now available only to nation-states will soon be available to individuals. That's not surprising, of course. I've probably got more computing power in my home (where we usually have nine or ten computers at any one time) than most nation-states could muster a few decades ago, and it does, in fact, allow me to do all sorts of things that individuals couldn't possibly have done on their own until such power became available. But the changes go beyond computers, which merely represent the first wave of exponential technological progress. People will have not only intellectual but physical powers previously unavailable to individuals. Changes will come faster and thicker than we have seen from the computer revolution so far.
Kurzweil discusses the Singularity, and what it's likely to mean, in excerpts from the following interview originally done for my blog, InstaPundit. [9] I encourage you to read his book, though, because the Singularity is, in a sense, the logical endpoint of the many near-term trends and events described in this book. The world is changing in a big way, and my reports might be likened to those from a frontline correspondent, while Kurzweil's writings are more in the nature of a strategic overview.
Reynolds: Your book is called The Singularity Is Near and -- as an amusing photo makes clear -- you're spoofing those "The End is Near" characters from the New Yorker cartoons.
For the benefit of those who aren't familiar with the topic, or who may have heard other definitions, what is your definition of "The Singularity"? And is it the end? Or a beginning?
Kurzweil: In chapter 1 of the book, I define the Singularity this way: "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's own particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a 'singularitarian.'"
The Singularity is a transition, but to appreciate its importance, one needs to understand the nature of exponential growth. On the one hand, exponential growth is smooth with no discontinuities, and values remain finite. On the other hand, it is explosive once we reach the "knee of the curve." The difference between what I refer to as the "intuitive linear" view and the historically correct exponential view is crucial, and I discuss my "law of accelerating returns" in detail in the first two chapters. It is remarkable to me how many otherwise thoughtful observers fail to understand that progress is exponential, not linear. This failure underlies the common "criticism from incredulity" that I discuss at the beginning of the "Response to Critics" chapter.
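The gap between the "intuitive linear" view and the exponential view can be made concrete with a toy projection. The growth rates below are illustrative assumptions for the sake of the sketch, not figures from Kurzweil's book:

```python
# Contrast a linear projection of progress with an exponential one.
# Both start from the same baseline; the two views look similar early
# on, then diverge explosively -- the "knee of the curve" effect.

def linear_projection(start, yearly_gain, years):
    """Intuitive view: add a fixed amount of capability each year."""
    return start + yearly_gain * years

def exponential_projection(start, growth_factor, years):
    """Historical view: multiply capability by a fixed factor each year."""
    return start * growth_factor ** years

start = 1.0
for years in (1, 10, 20, 30):
    lin = linear_projection(start, yearly_gain=1.0, years=years)
    exp = exponential_projection(start, growth_factor=2.0, years=years)
    print(f"year {years:2d}: linear {lin:>4.0f}x, exponential {exp:>13,.0f}x")
```

After one year the two views agree; after thirty, the linear view predicts a 31-fold gain while annual doubling predicts a gain of over a billion-fold, which is why the two models lead to such different expectations.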
To describe these changes further, within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses, "experience beaming," and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned. But all of this is just the precursor to the Singularity. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We'll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.
Reynolds: Over what time frame do you see these things happening? And what signposts might we look for that would indicate we're approaching the Singularity?
Kurzweil: I've consistently set 2029 as the date that we will create Turing test-capable machines. We can break this projection down into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Two Japanese efforts targeting 10 quadrillion cps around the end of the decade are already on the drawing board. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms ....
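The arithmetic behind this projection is compound growth. As a rough sketch (the five-year window is my reading of "the end of this decade" from the mid-2000s, not a figure Kurzweil states), going from 10^14 cps to 10^16 cps implies a doubling time for available computation of under a year:

```python
import math

# Supercomputers at ~10**14 cps reaching ~10**16 cps in roughly five
# years: how many doublings does that take, and how fast must they come?
current_cps = 1e14
target_cps = 1e16
years = 5.0

doublings_needed = math.log2(target_cps / current_cps)  # ~6.64 doublings for a 100x gain
doubling_time = years / doublings_needed                # implied years per doubling

print(f"{doublings_needed:.2f} doublings in {years:.0f} years "
      f"-> one doubling every {doubling_time:.2f} years")
```

A hundredfold increase is just under seven doublings, so hitting it in five years requires computation to double roughly every nine months -- aggressive, but in line with the historical rates Kurzweil cites.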
In terms of signposts, credible reports of computers passing the full Turing test will be a very important one, and that signpost will be preceded by non-credible reports of successful Turing tests.
A key insight here is that the nonbiological portion of our intelligence will expand exponentially, whereas our biological thinking is effectively fixed. When we get to the mid-2040s, according to my models, the nonbiological portion of our civilization's thinking ability will be billions of times greater than the biological portion. Now that represents a profound change.
The use of the term "Singularity" in my book and by the Singularity-aware community is comparable to its use by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied billions and ultimately trillions of trillions fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That's what I've tried to do in this book.
Reynolds: You look at three main areas of technology, what's usually called GNR for Genetics, Nanotechnology, and Robotics. But it's my impression that you regard artificial intelligence -- strong AI -- as the most important aspect. I've often wondered about that. I'm reminded of James Branch Cabell's Jurgen, who worked his way up the theological food chain past God to Koschei The Deathless, the real ruler of the Universe, only to discover that Koschei wasn't very bright, really. Jurgen, who prided himself on being a "monstrous clever fellow," learned that "Cleverness was not on top, and never had been." [10] Cleverness isn't power in the world we live in now -- it helps to be clever, but many clever people aren't powerful, and you don't have to look far to see that many powerful people aren't clever. Why should artificial intelligence change that? In the calculus of tools-to-power, is it clear that a ten-times-smarter-than-human AI is worth more than a ten megaton warhead?
Kurzweil: This is a clever -- and important -- question, which has different aspects to it. One aspect is what is the relationship between intelligence and power? Does power result from intelligence? It would seem that there are many counterexamples.
But to piece this apart, we first need to distinguish between cleverness and true intelligence. Some people are clever or skillful in certain ways but have judgment lapses that undermine their own effectiveness. So their overall intelligence is muted.
We also need to clarify the concept of power as there are different ways to be powerful. The poet laureate may not have much impact on interest rates (although conceivably a suitably pointed poem might affect public opinion), but s/he does have influence in the world of poetry. The kids who hung out on Bronx street corners some decades back also had limited impact on geopolitical issues, but they did play an influential role in the creation of the hip hop cultural movement with their invention of break dancing. Can you name the German patent clerk who wrote down his daydreams (mental experiments) on the nature of time and space? How powerful did he turn out to be in the world of ideas, as well as on the world of geopolitics? On the other hand, can you name the wealthiest person at that time? Or the U.S. secretary of state in 1905? Or even the president of the U.S.? ...
Reynolds: It seems to me that one of the characteristics of the Singularity is the development of what might be seen as weakly godlike powers on the part of individuals. Will society be able to handle that sort of thing? The Greek gods had superhuman powers (pretty piddling ones, in many ways, compared to what we're talking about) but an at-least-human degree of egocentrism, greed, jealousy, etc. Will post-Singularity humanity do better?
Kurzweil: Arguably, we already have powers comparable to the Greek gods, albeit, as you point out, piddling ones compared to what is to come. For example, you are able to write ideas in your blog and instantly communicate them to just those people who are interested. We have many ways of communicating our thoughts to precisely those persons around the world with whom we wish to share ideas. If you want to acquire an antique plate with a certain inscription, you have a good chance of quickly finding the person who has it. We have increasingly rapid access to our exponentially growing human knowledge base.
Human egocentrism, greed, jealousy, and other emotions that emerged from our evolution in much smaller clans have nonetheless not prevented the smooth, exponential growth of knowledge and technology through the centuries. So I don't see these emotional limitations halting the ongoing progression of technology.
Adaptation to new technologies does not occur by old technologies suddenly disappearing. The old paradigms persist while new ones take root quickly. A great deal of economic commerce, for example, now transcends national boundaries, but the boundaries are still there, even if now less significant.
But there is reason for believing we will be in a position to do better than in times past. One important upcoming development will be the reverse-engineering of the human brain. In addition to giving us the principles of operation of human intelligence that will expand our AI tool kit, it will also give us unprecedented insight into ourselves. As we merge with our technology, and as the nonbiological portion of our intelligence begins to predominate in the 2030s, we will have the opportunity to apply our intelligence to improving on -- redesigning -- these primitive aspects of it ....
Reynolds: If an ordinary person were trying to prepare for the Singularity now, what should he or she do? Is there any way to prepare? And, for that matter, how should societies prepare, and can they?
Kurzweil: In essence, the Singularity will be an explosion of human knowledge made possible by the amplification of our intelligence through its merger with its exponentially growing variant. Creating knowledge requires passion, so one piece of advice would be to follow your passion.
That having been said, we need to keep in mind that the cutting edge of the GNR revolutions is science and technology. So individuals need to be science and computer literate. And societies need to emphasize science and engineering education and training. Along these lines, there is reason for concern in the U.S. I've attached seven charts I've put together (that you're welcome to use) that show some disturbing trends. Bachelor's degrees in engineering in the U.S. were 70,000 per year in 1985, but dwindled to around 53,000 in 2000. In China, the numbers were comparable in 1985 but soared to 220,000 in 2000, and have continued to rise since then. We see the same trend comparison in all other technological fields, including computer science and the natural sciences. We see the same trends in other Asian countries such as Japan, Korea, and India (India is not shown in these graphs). We also see the same trends on the doctoral level as well.
One counterpoint one could make is that the U.S. leads in the application of technology. Our musicians and artists, for example, are very sophisticated in the use of computers. If you go to the NAMM (National Association of Music Merchants) convention, it looks and reads like a computer conference. I spoke recently to the American Library Association, and the presentations were all about databases and search tools. Essentially every conference I speak at, although diverse in topic, looks and reads like a computer conference.
But there is an urgent need in our country to attract more young people to science and engineering. We need to make these topics cool and compelling.