An Army of Davids: How Markets and Technology Empower

7: HORIZONTAL KNOWLEDGE

The Internet is a powerful tool. But most attention seems to focus on its use as a means of vertical communications: from one to many. Even when we talk about how it allows individuals to compete on an even basis with Big Media organizations, we're usually talking about its ability to facilitate a kind of communication that's akin to what Big Media has always done.

But as important as this is -- and it's very important indeed -- it's probably dwarfed by the much more numerous horizontal communications that the Internet and related technologies -- cell phones, text messaging, and the like -- permit. They allow a kind of horizontal knowledge that is often less obvious than the vertical kind, but in many ways at least as powerful.

Horizontal knowledge is communication among individuals, who may or may not know each other, but who are loosely coordinated by their involvement with something, or someone, of mutual interest. And it's extremely powerful, because it makes people much smarter.

People used to be ignorant. It was hard to learn things. You had to go to libraries, look things up, perhaps sit and wait while a book was fetched from storage, or recalled from another user, or borrowed from a different library. What knowledge there was spent most of its time on a shelf. And if knowledge was going to be organized and dispersed, it took a big organization to do it.

TINY BUBBLES

Guinness became a publishing sensation by cashing in on that ignorance. Bar patrons got into so many hard-to-settle arguments about what was biggest, or fastest, or oldest that Guinness responded with The Guinness Book of World Records, bringing a small quantity of authoritative knowledge to bear in a handy form.

Things are different today. I'm writing this in a bar right now, and I have most of human knowledge at my fingertips. Okay, it's not really a bar. It's a campus pizza place, albeit one with twenty-seven kinds of beer on tap, a nice patio, and, most importantly, a free wireless Internet hookup. With that, and Google, there's not much that I can't find out.

If I'm curious about the Hephthalite Huns [1] or the rocket equation [2] or how much money Joe Biden [3] has gotten from the entertainment industry, I can have it in less time than it takes the barmaid to draw me a beer. [4]

What's more, I can coordinate that sort of information (well, it might be kind of hard to tie those particular three facts together, but you take my meaning) with other people with enormous speed. With email, blogs, and bulletin boards, I could, if the topic interested enough people, put together an overnight coalition -- a flash constituency -- without leaving the restaurant. (And in fact, some folks did pretty much just that recently, and succeeded in killing the "super-DMCA" bill before the Tennessee legislature. Alarmed at a proposed law that would have made it a felony to connect a Tivo without permission from a cable company, they organized, set up a website, and shot down a bill that the cable companies had put a lot of time and money into.)

So what? Everybody knows this stuff, right? It has been the subject of countless hand-waving speeches about the revolutionary potential of the Internet, blah, blah, blah, yada, yada, yada. Well, sort of. Everybody knows it. But they don't know it, yet, down deep where it counts. And even those who kind of get it at that level tend to forget -- as even I sometimes do -- just how revolutionary it is. And yes, it really is revolutionary, in ways that would have defied prediction not long ago.

Just try this thought experiment: Imagine that it's 1993. The Web is just appearing. And imagine that you -- an unusually prescient type -- were to explain to people what they could expect by, say, the summer of 2003. Universal access to practically all information. From all over the place -- even in bars. And all for free!

I can imagine the questions the skeptics would have asked: How will this be implemented? How will all of this information be digitized and made available? (Lots of examples along the line of "a thousand librarians with scanners would take fifty years to put even a part of the Library of Congress online, and who would pay for that?") Lots of questions about how people would agree on standards for wireless data transmission -- "It usually takes ten years just to develop a standard, much less put it into the marketplace!" -- and so on, and so on. "Who will make this stuff available for free? People want to be paid to do things!" "Why, even if we start planning now, there's no way we'll have this in ten years!"

Actually, that final statement is true. If we had started planning in 1993, we probably wouldn't have gotten there by 2033, much less before 2003. The Web, Wi-Fi, and Google didn't develop and spread because somebody at the Bureau of Central Knowledge Planning planned them. They developed, in large part, from the uncoordinated activities of individuals.

Why can you find all sorts of stuff, from information about the Hephthalite Huns to instructions for brewing beer (yes, it always comes back to beer), and even recipes for cooking squirrel, on the Web? Because people thought it was cool enough (to them) to be worth the effort (on their part) of putting it online. We didn't need a thousand librarians with scanners because we had a billion non-librarians with computers and divergent interests. Wi-Fi sprang up the same way: not as part of a national plan by the Responsible Authorities, but as part of a ground-up movement composed of millions of people who just wanted it and companies happy to sell them the gear they needed to pull it off.

There are two lessons here. One is that the skeptics, despite all their reasonable-sounding objections, would have been utterly wrong about the future of the Web, a mere ten years after it first appeared. And the second is why they would have been wrong: because they didn't appreciate what lots of smart people, loosely coordinating their actions with each other, are capable of accomplishing. It's the power of horizontal, as opposed to vertical, knowledge.

As the world grows more interconnected, more and more people have access to knowledge and coordination. Yet we continue to underestimate the revolutionary potential of this simple fact. Heck, forget potential -- we regularly underestimate the revolutionary reality of it, in the form of things we already take for granted, like Wi-Fi and Google.

But I'm not a wild-eyed visionary. As a result, I'm going to make a very conservative prediction: that the next ten years will see revolutions that make Wi-Fi and Google look tame, and that in short order we'll take those for granted too. It's a safe bet.

Of course, not everyone is happy. The spread of horizontal knowledge is discomfiting big organizations that have depended on vertical organization. Not surprisingly, some of the first to be affected are those in the media.

In the old days, if you didn't like what you read in the newspaper, you could either complain to your neighbors, or send a letter to the editor that -- maybe -- would be published days or weeks later, when everyone had forgotten the story you were complaining about. And if you worked at a newspaper, you couldn't even do that. Newspapers aren't very enthusiastic about publishing letters from unhappy employees.

INSIDE, OUTSIDE, UPSIDE DOWN

For the New York Times, though, it became painfully obvious how badly that old system had broken down as the career of executive editor Howell Raines came to an end. From the outside, bloggers like Andrew Sullivan and Mickey Kaus, along with specialty sites like TimesWatch, kept up constant pressure. Every distortion and misrepresentation (and there were plenty, of course) was picked up and noted. The result was a steady diminution of the Times's prestige among the opinion-making classes, opening it up to criticism it once didn't have to face, thanks to the quasi-mystical awe in which many journalists have traditionally held it.

Meanwhile, the Internet also opened things up from the inside. Unhappy Times staffers in previous years could have grumbled to their colleagues at other papers, but such grumbling would have been largely futile. Now, on the other hand, thanks to email and websites such as Jim Romenesko's (and quite a few blogs that got leaked information), they could grumble to a major audience. They could also engage in that most devastating of insider activities, the leaking of sanctimonious and dumb internal memos from the bosses. (Note to bosses: If you distribute your dumb and sanctimonious memos on paper instead of via email, you'll face less of that because people can't just hit "forward" and send them on. Of course, another approach might be to write memos that aren't dumb and sanctimonious.)

Nick Denton, however, warned shortly after Raines's departure that there's a downside to this, what he calls "organizational terrorism" via Internet, a sort of asymmetrical warfare that's not necessarily a good thing.

Raines, sometimes crassly, was trying to institute change; the organizational reactionaries didn't like it. In a previous era, a manager would have been able to execute the ringleaders, and ride out the discontent. But Raines was up against a powerful combination of old labor unionism, and the new industrial action: a leak to a weblog, tittle-tattle over the IM, whispered conversations to Howard Kurtz.... [M]anagers may sometimes have the power to hire and fire, but the peasants have the Internet now.

Is that a good thing? I'm not sure. I can imagine large organizations -- all large organizations -- becoming more conservative, so concerned to maintain a happy workplace that they avoid change. For smaller organizations, in the media and other sectors, this may be an opportunity. [5]


Nick was right to warn about this possibility. Things will be different, and already are. Even in the military, email and chat rooms are flattening hierarchies and changing power dynamics. On the other hand, what the Internet peasantry hates most is not just power, but bogosity. Raines was disliked as much because he played favorites (and it was seen as a favoritism not based on performance) as because he was dictatorial: tough, but unfair. And -- just as students resent a professor who won't shut up their over-talkative peers more than they resent one who will -- employees don't necessarily resent managers who run a taut ship, so long as they feel that merit is being rewarded over sucking up.

So it may be that managers who do a good job have less to fear, and that it will be in the interest of the people who ultimately run many large organizations, like boards of directors, to pay closer attention to the performance of managers, and to what the employee samizdat is saying about them. That's one way in which horizontal knowledge could work to improve organizations, not sabotage them as Nick suggests, so long as the board members apply some good sense.

On a smaller scale, the new Times editors may want to look at putting horizontal knowledge to work for them in another way. It would be child's play to take RSS feeds from a number of blogs (say, via Technorati), filter them to extract the references to stories in the Times, and then have an ombudsman look at those references to see if correction, amplification, or investigation is called for. A newspaper that did that (and it could just as easily be done by any major paper, not just the Times) would be enlisting a huge (and unpaid!) army of fact-checkers, and could fix mistakes within hours of their appearing, thus turning inside its competition and enhancing its reputation, all at very low cost. I first suggested this three or four years ago, but it hasn't happened yet (though Times rival the Washington Post is making links to blogs mentioning its stories available to readers, which is a first step).
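A rough sketch of what such a feed-scanning pipeline might look like, assuming a hand-maintained list of blog RSS feeds and the third-party feedparser library; the feed URLs and the nytimes.com pattern below are placeholders for illustration, not anything the Times actually runs:

```python
# Sketch: scan blog RSS feeds for links to New York Times stories,
# so an ombudsman could review what bloggers are saying about them.
# Feed URLs and the nytimes.com pattern are illustrative placeholders.
import re
import feedparser  # third-party: pip install feedparser

BLOG_FEEDS = [
    "https://example-blog-one.com/rss",       # hypothetical feed URLs
    "https://example-blog-two.com/atom.xml",
]
TIMES_LINK = re.compile(r"https?://(?:www\.)?nytimes\.com/\S+")

def times_mentions(feed_urls):
    """Yield (blog post link, Times URL) pairs for every Times reference found."""
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = " ".join([entry.get("title", ""), entry.get("summary", "")])
            for times_url in TIMES_LINK.findall(text):
                yield entry.get("link", url), times_url

if __name__ == "__main__":
    for blog_post, story in times_mentions(BLOG_FEEDS):
        print(f"{blog_post} discusses {story}")
```

A real ombudsman's tool would need deduplication, ranking, and full post text, but the basic filtering step is about this simple.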

Will it happen? That depends on whether Big Media folks want to ride the wave of horizontal knowledge, or just try to keep their heads above water.

So far the signs aren't entirely promising. A lot of folks around the blogosphere got angry at the New York Times's John Markoff for comments he made to the Online Journalism Review, in which he likened blogs to CB Radio in the 1970s. But although Markoff meant to be dismissive, he was actually onto something, because CB radio was an early enabler of horizontal knowledge, with some pretty significant social and political consequences.

BIG BROTHER VS. THE CONVOY

Citizens' Band radio gets a bum rap nowadays -- in most people's minds, it's associated with images of Homer Simpson (in the flashback scenes where he had hair) shouting, "Breaker 1-9" and singing C. W. McCall's "Convoy!" loudly and off-key. In other words, something out of date and vaguely risible, like leisure suits or Tony Orlando.

But, in fact, CB was a revolution in its time, whose effects are still felt today. Before Citizens' Band was created, you needed a license to be on the air, with almost no exceptions. Radio was seen as Serious Technology for Serious People, nothing for normal folks to fool around with, at least not without government approval. Citizens' Band put an end to that, not by regulatory design but by popular fiat. Originally, a license was required for Citizens' Band too, but masses of people simply broke the law and operated without a license until the FCC was forced to bow to reality. It was a form of mass civil disobedience that accomplished in its sphere what drug-legalization activists have never been able to accomplish in theirs. No small thing.

And it didn't stop there. Citizens' Band radio became popular because of widespread resistance to another example of regulatory overreach: the unpopular fifty-five-mile-per-hour speed limit. Actually passed in 1974, but reenacted on Jimmy Carter's watch and popularly identified with Carter's "moral equivalent of war," it marked the first time speed limits were set not for reasons of highway safety, but for reasons of politics and social engineering. Americans rejected that approach in massive numbers and entered into a state of more-or-less open rebellion. CB was valuable -- as songs like "Convoy!" and movies like Smokey and the Bandit illustrated -- because it allowed citizens to spontaneously organize against what they saw as illegitimate authority. Before CB, the police -- with all their expensive infrastructure of radio networks, dispatchers, and patrol cars -- had the communications and observational advantage, but with CB each user had the benefit of hundreds or thousands of eyeballs warning about speed traps in advance. This made breaking the speed laws much easier and enforcing them much harder.

And it worked: the fifty-five-mile-per-hour speed limit was repealed. That (plus the gradual introduction of cheap and effective radar detectors, which allowed citizens to watch for speed traps while still listening to their car stereos) gradually ended the Citizens' Band revolution. Sort of, because like many fads, Citizens' Band didn't really go away. It just faded from view and turned into something else.

CB radio primed a generation that was used to top-down communication on the network-news model for peer-to-peer communication, getting people in the right frame of mind for the Internet, cell phones, and text messaging. It also served as a vehicle for spreading countercultural resistance to authority beyond the confines of hippiedom, taking it deep into the heart of middle America.

In fact, it's probably not too much of a stretch to say that this combination of resentment over Big Brother intrusiveness, coupled with the means of resisting those intrusions, laid the groundwork for the antigovernment explosions of the 1980s. A lot of people used CB radio to evade the unpopular speed limit, and Carter wound up losing to Ronald Reagan, who preached individual freedom and deregulation. It's hard to know which way the causality runs here -- did CB make Reagan's election more likely, by fanning the flames of antibureaucratic sentiment? Or was it just an early indicator of that sentiment? Who knows?

But either way, it was something important. And so it is with more modern technologies, like blogs and text messaging and Internet video. Like CB, they may well vanish from public attention, if not from the actual world (plenty of CB radios still get sold, after all -- in fact, after being stuck in an endless traffic jam on I-40 a couple of weeks back, I just ordered one myself). And they'll probably be replaced, or absorbed, by new technology within a few years. But they're popular right now because people want to get around Big Media's stranglehold on news and information, just as CBs were popular with people who wanted to get around speed limits. And, like Jimmy Carter, Big Media folks seem largely clueless about what's going on.

Of course, it's not just the media who face threats from insiders. Governments, too, face new kinds of pressure from horizontal knowledge in a way that the CB revolution didn't foreshadow.

That seems to be the case for the United States government, the ultimate large organization. According to a report by Bill Broad in the New York Times, employee-bloggers have been giving the Los Alamos National Lab and the Department of Energy fits:

A blog rebellion among scientists and engineers at Los Alamos, the federal government's premier nuclear weapons laboratory, is threatening to end the tenure of its director, G. Peter Nanos.

Four months of jeers, denunciations and defenses of Dr. Nanos's management recently culminated in dozens of signed and anonymous messages concluding that his days were numbered. The postings to a public weblog conveyed a mood of self-congratulation tempered with sober discussion of what comes next. [6]


And that's perhaps an appropriate mood for the blogosphere as a whole. On the one hand, we've started to see a switch: where an earlier generation of articles on employee blogging warned employees about the danger of retribution from employers, a newer version of the story warns employers about the power of bloggers in their midst.

On the other hand, it's hard for organizations to operate when dissent becomes easier, and more popular, than actually running things or doing work. Whistle-blowing is all very nice, but no organization made up largely of whistle-blowers is likely to thrive. While "organizational terrorism" may be a bit strong, Nick was certainly right to note that one of management's major advantages was informational -- it could know more, and communicate more to more people, than dissident employees hanging around the water cooler could.

That's changed now, and there's no doubt that it makes managers nervous. Still, I think the Los Alamos case also underscores what I wrote above in response to Nick Denton: The flattening of hierarchies that easier communication produces is a bigger threat to bad managers than to good ones, and in fact it's a useful tool for managers who want to know what's really going on.

Say what you will about the Los Alamos scandals, but no one has accused the lab of being a taut ship, or of rewarding merit above all else. While Internet samizdat may pose a threat to managers, it still seems to me that the threat is biggest where the management is the worst, and that exposing bad management and unhappy employees isn't necessarily such a bad thing.

The biggest danger, at any rate, won't come from the internal blogging. It will come from management's overreaction to internal blogging. If managers are afraid of internal bloggers and respond either with witch hunts and efforts to shut them down, or -- perhaps worse, from a standpoint of organizational health -- with attempts to appease dissidents by running their companies or organizations in ways that won't offend anyone, the damage will be far greater than the damage done by bloggers.

With or without bloggers in the mix, management requires a backbone. The smarter managers will read blogs, looking for real problems that need to be fixed, and they'll respond (perhaps on their own blogs?) to the critics. The smartest ones will even realize that employees know the difference between the chronic bellyachers and the people who have serious complaints and will respond accordingly. Easier communication is actually a useful asset to managers who wonder whether the folks below them are reporting the truth or presenting a rosy scenario designed to cover their asses. Some have figured this out already, and a Wall Street Journal study in October of 2005 found that many CEOs encourage open email communication with staff precisely for these reasons. [7] Extending that to reading blogs is a logical next step for the smart managers.

How many managers are this smart? I guess, thanks to the Internet, we'll find out.

THE INSIDE-OUT PANOPTICON

But of course -- as the CB era demonstrated -- there's more to horizontal knowledge than workplace carping. Dictators, and even democratic governments not terribly enthused about opposition, have traditionally discouraged communication among the citizenry. Vertical communication is just, well, safer for those in power.

That's certainly what happened when Philippine President Joseph Estrada was ousted in a "people power" revolution organized by cell phones and text messages: Over 150,000 protesters appeared on short notice, thanks to technologies that allowed a flash mob to appear without the kind of big, central organization it would have taken in the past. Other technologies are doing the same kind of thing. Musician Peter Gabriel founded a human rights group called Witness that distributes video cameras to activists around the world. Activists say that government and private thugs who might have taken violent action against them have often been deterred by the fear that video of their actions might become public. [8]

Combining video cameras and cell phones, as technology is in the process of doing, only intensifies the effect. An ordinary video camera can be confiscated and its tape destroyed, but a video camera that can transmit video wirelessly can be relaying the information to hundreds, thousands, or millions of people -- who may react angrily and spontaneously if anything happens to the person doing the shooting.

This represents the political future, for good and ill. I'm inclined to think that it's mostly good, but there are two sides to what Howard Rheingold calls "smart mobs." If the toppling of dictators via people power is one side, then riots by mobs of the ignorant are the other. As Rheingold observes:

On the political level, you're seeing peaceful democratic demonstrations like the ones [that brought down President Joseph Estrada] in the Philippines. You're also seeing riots, like the Miss World riots in Nigeria. Not all forms of human cooperation are pro-social. Some of them are antisocial. [9]


Absolutely. As Clive Thompson noted, the Miss World riots in Nigeria were organized by Muslim fundamentalists who took umbrage at a newspaper story they regarded as insufficiently respectful of Islam (it said that one of the contestants was pretty enough to have been chosen by Mohammed). Word spread by cell phone and text message, and the result was a mob attack on the newspaper offices. Mobs can take down dictatorial governments, Thompson pointed out, but they can also engage in lynchings. [10]

Well, yes. Communications can make it easy for democrats and human-rights activists to coordinate via cell phone and the Internet -- as they did in the Philippines, in the Ukraine, and in Lebanon -- but it can also make it easy for mobs of the ignorant or vicious to coalesce in response to bogus rumors. (These might be called "dumb mobs," I suppose.) The tools empower the individuals and make them "smarter" in terms of coordination and access to information. But they're smart mobs, not wise mobs. Wisdom comes from other sources, when it comes at all.

Still, we've certainly managed to hold riots and organize dumb mobs in the absence of technology since, well, the beginning of human history. What is new isn't the potential for mob action (Hitler used a mass medium, radio, to put together the ultimate dumb mob), but the potential for constructive and spontaneous group action. Nonetheless, like most technological changes that promise good, it won't happen all on its own. We need to be looking for ways to maximize the upside, and minimize the downside, as these things spread.

That may not be as hard as it sounds. Riots are sometimes spontaneous, but they're usually more organized than they look. Somebody -- gangs hoping for loot, religious zealots trying to raise a mob to smite unbelievers, government officials wanting to crush dissent -- gives things a push, usually figuring that their responsibility for doing so will be lost in the fog created by the riot and its aftermath. Then the mob forms, and the individuals who make it up do things, secure in the anonymity of the mob, that they would never do on their own.

Pervasive cameras and reporting make both aspects harder, and riskier. (And readily available information means that potential victims can avoid riots, and law enforcement authorities will have a better idea of what's going on, if they make proper use of what is available.) Like everything, it's a mixed bag, but I think it's unlikely that technology will do as much to empower dumb mobs as it does to promote smart ones.

And, as it happens, I have a few thoughts on how to help maintain that imbalance. Wherever possible, we should look for opportunities to inject truth and moderation into the web of horizontal communications. Rumor-debunking sites like Snopes.com are a good example -- in a Web-based world, Snopes serves as a sort of anti-Guinness Book, helping to neutralize false claims of the outrageous or upsetting.

It's also the case that, as with management in companies, government and antihate organizations can take advantage of what mass horizontal communications have to offer in the way of transparency. What bubbles up through blogs, chat boards, and email lists may be wrong, but it's a useful guide to what people are thinking, offering opportunities to counter rumors, incitements, and falsehoods before they reach critical mass. (I also suspect that emergency authorities could get a lot of useful information -- and not only in terms of pending riots -- just by watching for a sudden spike in text-messaging.)
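As a toy illustration of that last parenthetical, spotting a sudden spike in messaging volume is a simple anomaly-detection problem. This sketch assumes hourly message counts are already available from somewhere; the three-standard-deviation threshold and the sample numbers are arbitrary choices for illustration:

```python
# Sketch: flag a sudden spike in text-message volume against a recent baseline.
# Hourly counts are assumed to come from elsewhere; numbers here are made up.
from statistics import mean, stdev

def is_spike(counts, window=24, threshold=3.0):
    """Return True if the latest count exceeds the trailing mean
    by more than `threshold` standard deviations."""
    if len(counts) < window + 1:
        return False
    baseline = counts[-(window + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and (counts[-1] - mu) / sigma > threshold

hourly_counts = [1000, 1030, 980, 1010] * 6 + [5200]  # illustrative data
print(is_spike(hourly_counts))  # True: the last hour is far above the baseline
```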

What's more, to the extent that people can organize for constructive things as they've done with matters ranging from tsunami relief, to hurricane relief, to such collaborative research projects as SETI@home (where number-crunching is parceled out to members' computers, creating a massively parallel computing project on the cheap), to political efforts like FreeRepublic or DailyKos, the result is to discourage destructive efforts in favor of constructive ones. Not perfectly, of course, but more often than not.
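The SETI@home model mentioned above is, at bottom, just splitting one big job into independent work units and handing them out to whatever machines volunteer. A minimal local sketch of that pattern, with counting primes standing in for the real number-crunching and local processes standing in for volunteers' computers:

```python
# Sketch: split a large computation into independent work units and farm them
# out in parallel, SETI@home-style. Local processes stand in for volunteers'
# machines; counting primes is a stand-in for the real number-crunching.
from multiprocessing import Pool

def count_primes(work_unit):
    """One work unit: count primes in the half-open range [start, stop)."""
    start, stop = work_unit
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(start, stop) if is_prime(n))

if __name__ == "__main__":
    chunk = 50_000
    work_units = [(i, i + chunk) for i in range(2, 1_000_000, chunk)]
    with Pool() as pool:                     # one "volunteer" per CPU core
        results = pool.map(count_primes, work_units)
    print("primes below one million:", sum(results))  # 78498
```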

At any rate, we'd best be thinking of ways to capitalize on horizontal knowledge, because it's likely here to stay. Turning back the clock on the communications revolution would probably be impossible and would certainly be vastly expensive. I don't expect that it will happen, which means we'd better figure out how to live with the change.

8: HOW THE GAME IS PLAYED

Once upon a time, breadth of experience -- both firsthand and through books, anecdotes, and institutional wisdom -- was one of the things that separated the aristocracy from the peasantry. Peasants knew their daily lives and surroundings, but not much else. Only the nobility -- and its hanger-on cultures of soldiers, scribes, clerics, and scholars -- had experience in the broader world.

That's not so true anymore. Mostly, of course, that's because ordinary people travel more, meet more people, and accomplish more than any peasant (or king) could have imagined a few centuries ago.

But the virtual world promises to do even more to expand the range of human experience. Not everyone is happy about that, but it's a trend that, in my opinion at least, can't be stopped, and shouldn't be stopped. Shaping it, on the other hand, may be worth some thought.

XBOX WARRIORS

Legislators around the country are trying to ban violent video games as immoral and dangerous in their effect on children. According to Wired News, "Lawmakers in at least seven states proposed bills during the most recent legislative session that would restrict the sale of games, part of a wave that began when the 1999 Columbine High School shootings sparked an outcry over games and violence." [1] The article notes that the bills are supported by "pediatricians and psychologists."

Actually, other psychologists (including my wife, a specialist in violent kids [2]) disagree with this assessment regarding video games and violence. And why we should care what pediatricians think about video games is beyond me -- what do they know about this stuff? (Most of the time they can't diagnose my daughter's strep throat correctly, which makes me doubt that their professional wisdom extends to complex social-psychological matters.)

But the move against violent video games strikes me as a bad idea for other reasons. Not only does it represent an unconstitutional infringement on free speech -- as the Wired News story notes, "None of the measures that passed have survived legal challenge" -- but it may actually make America weaker.

American troops are already using video games in training. Some are fancy custom jobs, like the combat simulators described in this article by Jim Dunnigan at StrategyPage:

[The simulators] surround the trainees and replicate the sights and sounds of an attack. Weapons equipped with special sensors allow the troops to shoot back from mockups of vehicles, and they also receive feedback if they are hit .... One problem with the ambushes and roadside bombs is that not every soldier driving around Iraq will encounter one, but if you do, your chances of survival go up enormously if you quickly make the right moves. The troops know this, and realistic training via the simulators is expected to be popular. [3]


The Army has also developed a game called "America's Army," originally intended as a recruiting tool, that has turned out to be realistic enough that it's used by the military for training purposes. [4] These training games draw heavily on existing technology, most of it developed for consumer-market video games. (And, in fact, the military uses some consumer games in training too.) They also draw on troops' skills at rapidly mastering such simulators, skills likely honed on consumer video games.

What's more, civilians who play military video games may acquire useful knowledge. This knowledge may even have political ramifications. When television commentators second-guess things that happen in combat -- often showing an astounding degree of military ignorance in the process -- people who have played military video games are more likely to see through it. At the very least, they have some sense of how fast things can happen, and how confusing they can be.

(SIMULATED) WAR: WHAT IS IT GOOD FOR?

In fact, shortly after 9/11 Dave Kopel and I wondered if the spread of military knowledge via war-gaming might lead to changes in the way war is perceived by Americans. We also noted that war games have played an important educational role at all sorts of levels. [5]

As a population, the American public probably has greater expertise concerning serious military history than any previous society. This expertise has been acquired steadily over the past four decades, and it has happened largely without notice from the media, academics, or the punditocracy. What's more, people have become more knowledgeable in spite of the removal of most military subjects from the mainstream educational curriculum, and despite the PC movement's success in driving military history out of history departments.

One reason that this underground military education has gone unnoticed is that the people acquiring the expertise are mostly techno-geeks, the very people that some commentators point to as evidence of our unmartial character. Yet to anyone who knows it, geek culture is full of military aspects.

Military history is a popular interest among geeks. So is skill with firearms. As an article in Salon noted awhile back, geeks tend to be strong gun-rights enthusiasts, regarding both computers and firearms as technologies that empower the individual. [6] Geeks, knowing that they can program their VCR, also believe themselves capable of cleaning a gun safely.

Some geeks take their enthusiasm further, engaging in massed battles with broadswords and maces as part of the Society for Creative Anachronism's popular rounds of medieval combat. Though the weapons are usually blunt or padded, injuries are about as common as in rugby and football, and the rules are far less refined. Geeks also read military science fiction by authors like David Drake, Jerry Pournelle, S. M. Stirling, Eric Flint, and Harry Turtledove, in which war is not glorified or simplified, but presented in surprisingly realistic fashion.

But the biggest source of geek military knowledge comes from that staple of geek culture, war-gaming. Ever since the introduction of war games in the early 1960s by companies like Avalon Hill and Simulations Publications Inc. (SPI), geeks have made war-gaming a major pastime. The games, once played on boards with cardboard counters, now often run on PCs and realistically reflect all sorts of concerns, from logistics, to morale, to the importance of troop training.

War-gaming, like chess, has always been an activity mainly for intelligent males. At the peak of board-based war-gaming, in the late 1970s and early 1980s, most good high schools had a war-game club. And you can be sure that the average member of the war-game club ended up with a job and an income far ahead of the average student at the school.

Board-based military games attracted a smaller set of the geek population in subsequent decades, as computers became a new way for geeks to have fun, and as Dungeons & Dragons (originally just a small part of the war-gaming world) became enormously popular, spawning scores of imitators.

Avalon Hill, the founding father of the industry, has been taken over by Hasbro, which has junked most of AH's once-formidable catalogue. Today, Decision Games is probably the leading war-game publisher, with the flagship magazine Strategy & Tactics (a military-history magazine with a game in every issue), and with a catalogue of board and computer games ranging from Megiddo (1479 BC, the epic chariot clash between Egypt's Tuthmosis III and the King of Kadesh) all the way to the 1973 Arab-Israeli war.

Today's computer format for games works better at creating "the fog of war," since the computer can hide pieces. The computer also makes it easier to play solitaire -- and solitaire was always a major form of war-game play; the players were attracted by the ideas, not by the chance to chat while playing bridge.

How well have war games taught war? Well enough that several war games have been used as instructional or analytical tools by the United States military.

Over the years, game designers learned how to playtest games before publication, so that players would be forced to address real strategy and tactics, as opposed to manipulating artifacts of the game system. No game could possibly simulate everything realistically, but the best games pick some key challenges faced by the real-world commanders and make the players deal with the same problems. For example, the many games depicting the 1941 German invasion of the U.S.S.R. find the German player with near total military superiority in any given battle -- but always wondering whether to outrun his supply lines, and conquer as much ground as possible, before the winter sets in. Other games make the players work on the delicate balance of combined arms -- learning how to make infantry, tanks, and artillery work together in diverse terrain, and learning what to do when your tanks are all destroyed but the enemy still has fifteen left.

Some war-gamers prefer purely tactical games, such as plane-to-plane or ship-to-ship combat. These players come away with amazing amounts of knowledge about submarines, or fighter planes, or Greek triremes, or dreadnaughts. And since real wargamers like lots of different games, many learn, in-depth, about many different military subjects.

Even the least successful games teach a good deal of geography and history. And they always demonstrate how the "right" answer to a military strategy question is usually clear only in hindsight.

The war-gaming magazines are all about military history, naturally, and most war-gamers end up reading military history and strategy books too. If you ask, "Who was Heinz Guderian?" most people will guess, "Some sort of ketchup genius?" War gamers will be ones who answer: "The German general who invented modern tank warfare, and who wrote a famous memoir, Panzer Leader."

Most people who war-game don't become real warriors -- although the games have always been especially popular at military academies. Anyone who spends a few hundred hours playing war-games (and many hobbyists put in thousands of hours) will soon know more about the nuts and bolts of warfare than most journalists who cover the subject or most politicians who vote on military matters.

So here's the funny thing. While the official American culture around, say, 1977 was revolted by anything military, a bunch of the nation's smartest young males -- the "leaders of tomorrow" -- were reading Panzer Leader and Sir Basil Henry Liddell Hart's Strategy, [7] and, of course, Sun Tzu's Art of War, long before it became a business-school cliche.

This was no accident. Many of those who founded the war-game publishing business feared that, with the antimilitarism caused by the Vietnam War and, later, with the adoption of the all-volunteer army, American society would become estranged from all things military, leaving ordinary citizens too ignorant to make meaningful democratic judgments about war. They hoped that realistic simulation games would teach important principles.

We're only now testing the societal effect of having such a large number of knowledgeable citizens. The Gulf War was too short, and too much of a set piece, for public military knowledge to play a major role. But there's reason to believe that it will be different this time -- especially as the favored geek mode of communication, the Internet, is now pervasive. This means that geeks' knowledge, and their knowledgeable opinions, will have substantial influence. They will be able to put the military events of any given day into a much broader perspective, and they may be opinion leaders who help their friends and neighbors avoid the error of thinking that the last fifteen minutes of television footage tell the conclusive story of the war's progress. The role of warbloggers -- and military bloggers -- so far has certainly seemed to fit this bill.

The phenomenal educational effort of the war-game publishers has ensured that, despite the neglect of matters military by most educational institutions, important aspects of military knowledge were kept alive and taught to new generations of Americans, in a fashion so enjoyable that many didn't even realize they were being educated.

VIRTUAL DATING AND OTHER VITAL EDUCATIONAL TOOLS

Of course, the usefulness of computer games as an educational technique goes well beyond war, as I discovered recently firsthand when I heard my daughter and one of her friends having an earnest discussion: "You have to have a job to buy food and things, and if you don't go to work, you get fired. And if you spend all your money buying stuff, you have to make more."

All true enough, and worthy of Clark Howard or Dave Ramsey. And it's certainly something my daughter has heard from me over the years. But rather than quoting paternal wisdom, they were talking about The Sims, a computer game that simulates ordinary American life, which swept through my patch of Little-Girl Land at breakneck speed. Thanks to The Sims, the girls know how to make a budget and how to read an income statement -- and to be worried when cash flow goes negative. They understand comparison shopping. They're also picking up some pointers on human interaction, though The Sims characters come up short in that department. (Then again, so do real people, now and then.)
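The household arithmetic at work here is nothing fancier than tracking income against spending month by month; a toy sketch, with made-up numbers, of the cash-flow lesson the game teaches implicitly:

```python
# Toy cash-flow check of the kind The Sims teaches implicitly: income minus
# spending each month, with a warning when cash flow turns negative.
monthly_income = 1200                                     # made-up numbers
monthly_bills = {"rent": 700, "food": 350, "fun": 250}    # totals 1300

balance = 500
for month in range(1, 7):
    cash_flow = monthly_income - sum(monthly_bills.values())   # -100 here
    balance += cash_flow
    note = "cash flow is negative -- time to worry" if cash_flow < 0 else "OK"
    print(f"month {month}: cash flow {cash_flow:+d}, balance {balance} ({note})")
```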

Now The Sims 2 has upped the stakes. Among other things, as its label makes clear, it allows players to "Mix Genes: Your Sims have DNA and inherit physical and personality traits. Take your Sims through an infinite number of generations as you evolve their family tree."

What more could a father want than a game that will teach his daughter that if you marry a loser, he'll likely stay a loser, and if you have kids with him, they'll have a good chance of being losers too? Thank God for technology.

All joking aside, I'm impressed with the things that these games teach. I've already mentioned the value of video games in teaching warlike skills, but of course those aren't the only skills games can impart, just the ones for which there was a large and early market. But as the technology improves, and people get more and more used to computers, I think we'll see a lot more games that teach as they entertain. SimWorld isn't the real world, of course. But it's a world in which actions have consequences, and not necessarily happy ones. (Your Sim characters can die, if you let them screw things up too much -- and they can have extramarital affairs, which, as in real life, usually turn out badly for all concerned.) It's a world in which narcissism, hedonism, and impulsiveness are punished, and in which traditional middle-class virtues, like thrift and planning, generally pay off. In short, it's a world that's a lot more like the real world than the fantasy worlds of movies, popular songs, and novels -- the places where children and adolescents have traditionally gotten their nonparental information on how life works.

And kids find this stuff more interesting than movies, popular songs, and novels, at least judging from the degree of addiction The Sims has produced among my daughter's crowd. Which means that we have not only a powerful teaching tool, but a powerful teaching tool that people actually want to learn from. It's not quite A Young Lady's Illustrated Primer, the computerized tutorial from Neal Stephenson's novel The Diamond Age, but you can see things moving in that direction.

What's more, it's a powerful teaching tool that people buy. The government does not decree the use of such a game from on high; instead it's a creation of a free market that had entertainment, not instruction, as its primary goal. And it's teaching something that most kids don't get in school or at home. I don't think that The Sims will replace schools, but it's interesting to see a consumer product providing an education that is, in some ways, more rigorous than many schools provide.

THE KIDS ARE ALRIGHT

It may be making a difference. At the very least, the fears of the video-game critics seem to be stillborn. American teenagers are doing better than ever, and people are trying to figure out why. Games just might have something to do with it; at the very least, they don't seem to be hurting.

Teen pregnancy is down, along with teen crime, drug use, and many other social ills. There's also evidence that teenagers are more serious about life in general and are more determined to make something worthwhile of their lives. Where just a few years ago the "teenager problem" looked insoluble, it now seems well on the road to solving itself. [8] But why?

Reading about this change, it suddenly occurred to me that I had the answer: porn and video games. That's what's making American teens healthier!

It should have been obvious. After all, one of the great changes in teenagers' social environments over the past decade or so has been far greater exposure to explicit pornography, via the Internet; and violence, via video games. Where twenty or thirty years ago teenagers had to go to some effort to see pictures of people having sex, now those things are as close as a Google query. (In fact, on the Internet it takes some small effort to avoid such pictures.) Meanwhile, video games have gotten more violent, with efforts to limit their content failing on First Amendment grounds.

But, despite continued warnings from concerned mothers' groups, teenagers are less violent, and -- according to some, if not all, studies -- they're having less sex, notwithstanding the predictions of many concerned people that such exposure would have the opposite effect. More virtual sex and violence would seem to go along with less real sex and violence; certainly with less pregnancy and violence. [9]

The solution is clear -- we need a massive government program to ensure that no American teenager goes without porn and video games. Let no child be left behind! Well, no. Not even I'm ready to argue for that kind of legislation, though I suppose candidates interested in the youth vote might want to give it a thought. But the real lesson is that complex social problems are, well, complex, and that the law of unintended consequences continues to apply.

When teen crime and pregnancy rates were going up, people looked at things that were going on -- including increased availability of porn and violent imagery -- and concluded that there might be something to that correlation. It turned out that there wasn't. Porn and Duke Nukem took over the land, and yet teenagers became more responsible and less violent.

Maybe the porn and the video games provided catharsis, serving as substitutes for the real thing. Maybe. And maybe there's no connection at all. (Or maybe it's a different one -- the research indicates that teenagers, though safer and healthier, are also fatter -- so perhaps the other improvements are the result of teens sitting around looking at porn and video games until they're too out-of-shape and unattractive for the real thing.) Most likely, the lesson is that -- once again -- correlation isn't causation, despite policy entrepreneurs' efforts to claim otherwise.

Regardless, the fears of the doomsayers have not come to pass. People can continue to claim that psychological research suggests that video games lead to violence and that porn leads to promiscuity, but in the real world the evidence suggests otherwise. So perhaps we should reconsider regulating video games. And we should definitely take claims of impending social doom with a grain of salt. (Hey, while we're at it, why not encourage surfing porn and playing shoot-'em-up games? After all, as the activists say, if it saves just one child, it's worth it!)

More seriously, such a lack of evidence is reason enough not to shut down the virtual worlds that kids are inhabiting. Instead, we may want to look at the lessons they learn. I don't think that Duke Nukem or Grand Theft Auto are particularly harmful, but it would be useful for people to think about ways of making those games teach productive real-world lessons, and I think that can be done without making them uninteresting. The real world is interesting, after all, and it's very, very good at teaching real-world lessons. The advantage of the virtual world is that those lessons can be learned without bloodshed, bankruptcy, or jail. Seems like a good thing to me.

9: EMPOWERING THE REALLY LITTLE GUYS

All sorts of new technologies promise to empower individuals, but the ultimate empowerer of ordinary people may well turn out to be nanotechnology, the much-hyped but still important technology of molecular manufacturing and computing. Indeed, for all the nano-hype, the reality of nanotechnology may turn out to exceed the claims. The result may be as big a change as the Industrial Revolution, but in a different direction.

Nanotechnology derives its name from the nanometer, or a billionth of a meter, and refers to the manipulation of matter at the atomic and molecular level. The ideas behind nanotechnology are simple ones: every substance on Earth is made up of molecules composed of one or more atoms (the smallest particles of elements). To describe the molecules that constitute a physical object and how they interrelate is to say nearly everything important about the object. It follows, then, that if you can manipulate individual atoms and molecules and put them together in certain configurations, you should be able to create just about anything you desire. And if technologies like computers and the Internet have empowered individuals by giving them drastically more control over the organization of information, the impact of nanotechnology -- which promises similar control over the material world -- is likely to be much greater. This goes well beyond homebrewing beer, though, as with making beer, nanotechnology involves letting someone else do the hard work at the microscopic level.

Richard Feynman's first description of nanotechnology still serves:

The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. . . . [I]t would be, in principle, possible for a physicist to synthesize any chemical substance that the chemist writes down. How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed -- a development which I think cannot be avoided. [1]


Modern nanotechnology researchers want to go beyond synthesizing "substances" (though that has great importance) to use nanotechnology's atom-by-atom construction techniques to produce objects: tiny, bacterium-sized devices that can repair clogged arteries, kill cancer cells, fix cellular damage from aging, and (via what are called "assemblers") make other devices of greater size or complexity by plugging atoms, one at a time, into the desired arrangements, very quickly. Other researchers believe that nanotechnology will allow for a degree of miniaturization that might permit computers a millionfold more efficient than anything available now. Still others believe that nanotechnology's tiny devices will be able to unravel mysteries of the microscopic world (such as cell metabolism, the aging process, and cancer) in ways that other tools will not be able to.

So far, pioneers like Eric Drexler and Robert Freitas have worked out a lot of the details, and research has produced some small devices, but nothing as exotic as those described above. But nanotechnologists are refining both their instrumentation and their understanding of nanofabrication at an accelerating rate. Will they be able to fulfill the field's promise? Richard Feynman thought so. That raises a lot of interesting possibilities -- and questions.

The digital revolution brought us a debate over the difference between virtual reality and physical reality, a distinction the courts are still trying to figure out. But we are also at the dawn of a new technological revolution -- the nanotech revolution -- that may challenge our definition of what physical reality is. Superman could create diamonds by squeezing lumps of coal, using heat and pressure to rearrange the carbon atoms. Nanotechnology could achieve the same transformation, with considerably less fuss, simply by plugging carbon atoms together, one at a time, in the correct manner -- and without the embarrassing blue tights.

This sounds like the stuff of science fiction, and it is: In Michael Crichton's thriller, Prey, nanotech plays the bad guy. But in real life, nanotech is already being used by everyone from Lee Jeans, which uses nanofibers to make stain-proof pants, to the U.S. military, which uses nanotechnology to make better catalysts for rockets and missiles, to scientists who are using nanotechnology to develop workable artificial kidneys. [2]

"JUST ADD SUNLIGHT AND DIRT"

Many scientists initially doubted that nanotechnology's precise positioning of molecules was possible, but that skepticism appears to have been misplaced. That's no surprise, really, since living organisms, including our own bodies, make things like bone and muscle by manipulating individual atoms and molecules. Yet as criticism has shifted from claims that nanotechnology won't work to fears that it might, there have been calls to stop progress in the field of nanotechnology before research really gets off the ground. The ETC Group, an anti-technology organization operating out of Canada, has proposed a moratorium on nanotechnology research and on research into self-replicating machines. (At the moment, the latter is like calling for a moratorium on antigravity or faster-than-light travel -- nobody's doing it anyway.)

Proponents of this line of criticism face an uphill battle. What's attractive about devices that can be programmed to manipulate molecules is that they let you make virtually anything you want, and you can generally make it out of cheap and commonly available materials and energy -- what nanotech enthusiasts call "sunlight and dirt." Selectively sticky probes on tiny bacterium-scale arms, attached either to tiny robots or to a silicon substrate and controlled by computer, can grab the atoms they need from solution, and then plug them together in the proper configuration. It's not quite molecular Legos, but it's close. General purpose devices that can do this are called "assemblers," and the process is known among nanotechnology proponents as "molecular manufacturing."

This process raises some problems of its own, though. Assemblers that can manufacture virtually anything from sunlight and dirt might, as the result of a program error, manufacture endless copies of themselves, which would then go on to make still more copies, and so on. The fear that nanobots might turn the world into mush is known in the trade as the "gray goo problem," the apocalyptic scenario raised in Crichton's novel.
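The worry is one of unchecked exponential growth. A back-of-the-envelope illustration, using a purely hypothetical replication time, shows why the scenario sounds so alarming on paper:

```latex
% Illustration only: the one-hour doubling time is an assumption, not a real figure.
% An unchecked population of self-replicating assemblers grows as
N(t) = N_0 \cdot 2^{t/\tau}
% so a single assembler (N_0 = 1) that copied itself every \tau = 1 hour would,
% after three days, number
N(72\ \mathrm{h}) = 2^{72} \approx 4.7 \times 10^{21} \ \text{copies.}
```

Real devices would, of course, run up against limits of raw material and energy long before that, which is part of why, as noted below, nanotech's backers worry more about deliberate abuse than about accident.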

Nanotech's backers, however, believe the real problem won't be accident, but abuse. With mature nanotechnology, it might be possible to disassemble enemy weapons. (Imagine bacterium-sized devices that convert high explosives into inert substances, a technique that would neutralize even nuclear weapons, whose detonators are made of chemical high explosive.) On a more threatening note, sophisticated nanodevices could serve as artificial "disease" agents of great power and subtlety. Highly sophisticated nanorobots could even hide out in people's brains, manipulating their neurochemistry to ensure that they genuinely loved Big Brother. Like nuclear weapons, these devices would be awesome in their destructiveness, and their misuse would be terrifying. Still, the race to harness this power is well underway: Defense spending on nanotechnology is climbing, and civilian spending is over $1 billion a year. [3]

In a world in which the promises of nanotechnology were realized, practically anyone could live a life that would be extraordinary by today's standards, in terms of health (thanks to nanomedicine) and material possessions. DNA damaged by radiation, toxins, or aging could be repaired; arterial plaque could be removed; and cancerous or senescent cells could be destroyed or fixed. Organs could be replaced or even enhanced. Researcher Robert Freitas surveys many of these issues in his book Nanomedicine, which explores such topics as "respirocytes" -- tiny devices in the bloodstream that could deliver oxygen when the body wasn't able to, protecting against everything from drowning to heart attacks and strokes long enough to allow medical assistance. And this just scratches the surface in terms of potential enhancements, which might also involve stronger muscles, better nerves, and enhanced cognition -- the last being the subject of an ongoing Department of Defense research project already. [4]

Most physical goods could be manufactured onsite at low cost from cheap raw materials. Imagine owning an appliance the size of a refrigerator, full of nanoassemblers, that ran on sunlight and dirt (well, solar electricity and cheap feedstocks, anyway) and made pretty much everything you need, from clothing to food. The widespread availability of such devices would make things very, very different. Material goods wouldn't be quite free, but they would be nearly so.

In such a world, personal property would become almost meaningless. Some actual physical items would retain sentimental value, but everything else could be produced as needed, then recycled as soon as the need passed. (As someone who writes on a laptop that was cutting edge last year and is now old news, with its value discounted accordingly, I sometimes think we're already there except for the recycling part. Don't even ask about my MP3 player.)

Real property would retain its value -- as my grandfather used to say, "They're not making any more of it," especially oceanfront acreage -- but what would "value" mean? Value usually describes an object's ability to be exchanged for another item. But with personal property creatable on demand from sunlight and dirt, it's not clear what the medium of exchange would be. Value comes from scarcity, and most goods wouldn't be scarce. Intellectual property -- the software and designs used to program the nanodevices -- would be valuable, though once computing power became immense and ubiquitous, developing such designs wouldn't be likely to pose much of a challenge.

One thing that would remain scarce is time. Personal services like teaching, lawyering, or prostitution wouldn't be cheapened in the same fashion. We might wind up with an economy based on the exchange of personal services more than on the purchase of goods. As I mentioned earlier, that's where we're headed already to a point. Even without nanotechnology, the prices of many goods are falling. Televisions, once expensive, are near-commodity goods, as are computers, stereos, and just about all other electronics. It's cheaper to build new ones than to fix old ones, and prices continue to fall as capabilities increase. Nanotechnology would simply accelerate this trend and extend it to everything else. Ironically, it may be the combination of capitalism and technology that brings about a utopia unblemished by the need for ownership, the sort that socialists (usually no fans of capitalism) and romantics (no fans of technology) have long dreamed of.

PIONEERS' PROGRESS

We're not there yet, but things are progressing faster than even I had realized. Recently, I attended an EPA Science Advisory Board meeting where nanotechnology was discussed. What struck me is that even for people like me who try to keep up, the pace of nanotechnology research is moving much too fast to catch everything.

One of the documents distributed at that meeting was a supplement to the president's budget request, entitled National Nanotechnology Initiative: Research and Development Supporting the Next Industrial Revolution. [5] I expected it to be the usual bureaucratic pap, but in fact, it turned out to contain a lot of actual useful information, including reports of several nanotechnology developments that I had missed.

The most interesting, to me, was the report of "peptide [ring] nanotubes that kill bacteria by punching holes in the bacteria's membrane." You might think of these as a sort of mechanical antibiotic. As the report notes, "By controlling the type of peptides used to build the rings, scientists are able to design nanotubes that selectively perforate bacterial membranes without harming the cells of the host." [6] It goes on to note, "In theory, these nano-bio agents should be far less prone than existing antibiotics to the development of bacterial resistance." [7] What's more, if such resistance appears, it is likely to be easier to counter. Given the way in which resistance to conventional antibiotics has exploded, this is awfully good news.

Another item involved the use of nanoscale particles of metallic iron to clean up contaminated groundwater. In one experiment, aimed at the contaminant trichloroethylene (TCE), the results were quite impressive: "The researchers carried out a field demonstration at an industrial site in which nanoparticles injected into a groundwater plume containing TCE reduced contaminant levels by up to 96 percent." The report goes on to observe, "A wide variety of contaminants (including chlorinated hydrocarbons, pesticides, explosives, polychlorinated biphenyls and perchlorate) have been successfully broken down in both laboratory and field tests." [8] Not too shabby.

And there's more: the development of nanosensors capable of identifying particular microbes or chemicals, of nanomotors, and dramatic advances in materials. These advances shouldn't be underestimated.

We tend to forget this, but it's possible for a technology to have revolutionary effects long before it reaches its maturity. The impact of high-strength materials, for example, is likely to be much greater than people generally realize. Materials science isn't sexy the way that, say, robots are sexy, but when you can cut the weight, or boost the strength, of aircraft, or spacecraft, or even automobiles by a factor of ten or fifty, the consequences are enormous. Ditto for killing germs, or even detecting them in short order. These sorts of things aren't as exciting as true molecular manufacturing, and they're not as revolutionary, but they're still awfully important, and awfully revolutionary, by comparison with everything else.

When I gave my talk at the Science Advisory Board, I divided nanotechnology into these categories:

• Fake: where it's basically a marketing term, as with nanopants
• Simple: high-strength materials, sensors, coatings, etc. -- things that are important, but not sexy
• Major: advanced devices short of true assemblers
• Spooky: assemblers and related technology (true molecular nanotechnology, capable of making most anything from sunlight and dirt, creating supercomputers smaller than a sugar cube, etc.)

I noted that only in the final category did serious ethical or regulatory issues appear, and also noted that the recent flood of "it's impossible" claims relating to "spooky" nanotechnology seems to have more to do with fear of ethical or regulatory scrutiny than anything else. People in the industry are hoping to keep the critics away with a smokescreen of doubt as to the capabilities of the technology. That probably won't work, especially as nanotechnology develops and is put to use in more and more ways.

Up to now, talk of nanotechnology has generally involved either the "fake" variety (stain-resistant pants) or the "spooky" variety (full-scale molecular nanotechnology with all it implies). But as what might be called midlevel nanotechnology -- neither fake nor spooky -- begins to be deployed, it's likely to have a substantial effect on the nature of the debate. It's one thing to worry about (fictitious) swarms of predatory nanobots, a la Michael Crichton's novel Prey. It's another to talk about nanotech bans or moratoria when nanotechnology is already at work curing diseases and cleaning up the environment.

LEARNING FROM EXPERIENCE

I think that these positive uses will probably shift the debate away from the nano-Luddites. But, on the other hand, as nanotechnology becomes commonplace, serious discussion of its implications may be short-circuited. I think that the nanotech business community is actually hoping for such an outcome, but I continue to believe that such hopes are shortsighted. Genetically modified foods, for example, came to the market with the same absence of discussion, but the result wasn't so great for the industry. Will nanotechnology be different? Stay tuned. Whatever happens, I think that trying to stand still might well prove the most dangerous course of action.

This may seem surprising, but experience suggests that it's true.

For an academic project I worked on awhile back, I reviewed the history of what used to be called "recombinant DNA research" and is now generally just called genetic engineering or biotechnology. Back in the late 1960s and early 1970s, this was very controversial stuff, with opponents raising a variety of frightening possibilities.

Not all the fears were irrational. We didn't know very much about how such things worked, and it was possible to imagine scary scenarios that at least seemed plausible. Indeed, such plausible fears led scientists in the field to get together, twice, holding conferences at Asilomar in California, to propose guidelines that would ensure the safety of recombinant DNA research until more was known.

Those voluntary guidelines became the basis for government regulations, regulations that work so well that researchers often voluntarily submit their work to government review even when the law doesn't require it -- and standard DNA licensing agreements often even call for such submission. Self-policing was their key element, and it worked.

When the DNA research debate first started, scientific critics such as Erwin Chargaff met the notion of scientific self-regulation with skepticism. Chargaff predicted modern-day Frankensteins or "little biological monsters" and compared the notion of scientific self-regulation to that of "incendiaries forming their own fire brigade." Such critics warned that the harms that might result from permitting such research were literally incalculable, and thus it should not be allowed.

Others took a different view. Physicist Freeman Dyson, who admitted that (as a physicist, not a biologist) he had no personal stake in the debate, noted, "The real benefit to humanity from recombinant DNA will probably be the one no one has dreamed of. Our ignorance lies equally on both arms of the balance. The public costs of saying no to further development may in the end be far greater than the costs of saying yes." Harvard's Matthew Meselson agreed. The risk of not going forward, he argued, was the risk of being left open to "forthcoming catastrophes," in the form of starvation (which could be addressed by crop biotechnology) and the spread of new viruses. Critics like Chargaff pooh-poohed this view, saying that the promise of the new technology to alleviate such problems was unproven. [9]

Meselson and Dyson have been vindicated. Indeed, Meselson's comments about "forthcoming catastrophes" were made (though no one knew it at the time) just as AIDS was beginning to spread around the world. Without the tools developed through biotechnology and genetic engineering, the Human Immunodeficiency Virus could not even have been identified, and treatment efforts would have been limited. Had we listened to the critics, in other words, it's likely that many more people would have died. Meanwhile, the critics' Frankensteinian fears have not come true, and the research that was feared then has become commonplace, as this excerpt from John Hockenberry's DNA Files program on NPR illustrates:

Hockenberry: In those early days [Arthur] Caplan says people were concerned about what would happen if we tried to genetically engineer different bacteria.

Caplan: The mayor of Cambridge, Massachusetts, at one point said he was worried if there were scientific institutions in his town that were doing this, he didn't want to see sort of Frankenstein-type microbes coming out of the sewers.

Hockenberry: Today those early concerns seem almost quaint. Now even high school biology classes like this one in Maine do the same gene combining experiments that once struck fear into the hearts of public officials and private citizens. [10]


This experience suggests that we need to pay close attention to the downsides of limiting scientific research, and that we need to scrutinize the claims of fearmongering critics every bit as carefully as the claims of optimistic boosters. This is especially true at the moment, because, arguably, we're in a window of vulnerability where many technologies are concerned. For example, in 2002 researchers at SUNY-Stony Brook synthesized a virus using a commercial protein synthesizer and a genetic map downloaded from the Internet. This wasn't really news from a technical standpoint (I remember a scientist telling me in 1999 that anyone with a protein synthesizer and a computer could do such a thing), but many found it troubling. [11]

But at the moment, it's troubling because we know more about viruses than about their cures, meaning that it's easier to cause trouble by making viruses than it is to remedy viruses once made. In another decade or two, depending on the pace of research, developing a vaccine or cure will be just as easy. That being the case, doesn't it make sense to progress as rapidly as possible, to minimize the timespan in which we're at risk? It does to me.

Critics of biotechnology feel otherwise. But their track record hasn't been very impressive so far. What's more interesting is who's not criticizing nanotechnology. Typically Luddite Greenpeace, for instance, has been surprisingly moderate in its response. The environmental organization has sponsored a report entitled "Future Technologies, Today's Choices: Nanotechnology, Artificial Intelligence and Robotics; A Technical, Political and Institutional Map of Emerging Technologies" [12] that looks rather extensively at nanotechnology.

Surprisingly, the report rejects the idea of a moratorium on nanotechnology, despite calls to squelch nanotech from other environmental groups. Instead, it finds that a moratorium on nanotechnology research "seems both unpractical and probably damaging at present." [13] The report also echoes warnings from others that such a moratorium might simply drive nanotechnology research underground.

Though overlooked in the few news stories to cover the report, this finding is significant. With a moratorium taken off the table, the question then becomes one of how, not whether, to develop nanotechnology.

The report also takes a rather balanced view of the technology's prospects. It notes that there has been a tendency to blur the distinction between nanoscale technologies of limited long-term importance (e.g., stain-resistant "nanopants") and build-anything general assembler devices and other sophisticated nanotechnologies, so as to make incremental work look sexier than it is. This is important: the report's not-entirely-unreasonable worries about the dangers of nanomaterials are distinguishable from more science-fictional concerns of the Crichton variety. (Remember, Crichton rhymes with "frighten.") Thus, it will be harder for Greenpeace to conflate the two kinds of concerns itself, as has been done in the struggle against genetically modified foods where opponents have often mixed minor-but-proven threats with major-but-bogus ones in a rather promiscuous fashion.

Indeed, it seems to me that nano-blogger Howard Lovy is right in saying, "Take out the code words and phrases that are tailored to Greenpeace's audience, and you'll find some sound advice in there for the nanotech industry." [14] Greenpeace is calling for more research into safety. Now is a good time to do that -- even for the industry, which currently doesn't have a lot of products at risk. Quite a few responsible nanotechnology researchers are calling for this kind of research as well. Such research is likely to do more good than harm at blocking Luddite efforts to turn nanotechnology into a political football -- the next Genetically Modified Organism (GMO) derived food. Despite the vast promise of GMO foods (including vitamin-enhanced "golden rice" that can prevent widespread blindness among Third-World children), environmentalist hostility and fearmongering have kept most of them out of the market. As Rice University researcher Vicki Colvin noted in congressional testimony:

The campaign against GMOs was successful despite the lack of sound scientific data demonstrating a threat to society. In fact, I argue that the lack of sufficient public scientific data on GMOs, whether positive or negative, was a controlling factor in the industry's fall from favor. The failure of the industry to produce and share information with public stakeholders left it ill-equipped to respond to GMO detractors. This industry went, in essence, from "wow" to "yuck" to "bankrupt." There is a powerful lesson here for nanotechnology. [15]


She's right, and the nanotechnology industry would do well to learn from the failings she outlines. As I noted above, some companies and researchers have tended to dismiss the prospects for advanced nanotechnology in the hopes of avoiding the attention of environmental activists. That obviously isn't working. The best defense against nano-critics is good, solid scientific information, not denial -- especially given the strong promise of nanotechnology in terms of environmental improvement.

Nanotechnology legislation recently passed by Congress calls for some investigation into these issues of safety and ethics. I hope that there will be more emphasis on exploring both the scientific and the ethical issues involved in nanotechnology's growth. That sort of exploration -- done by serious people, not the charlatans and fearmongers who are sure to target the area regardless -- will be important in making nanotechnology succeed.

The critics won't shut up, of course, but some aspects of their criticism will have more weight than others, leaving the scaremongering less influential than the scaremongers hope. And if that's not enough, the argument for nanotechnology's role in maintaining military supremacy is likely to rear its head. Nanotechnology is likely to be as important in the twenty-first century as rocketry or nuclear physics were in the twentieth. The United States has a fairly competent nanotechnology research program, though many feel its efforts are misdirected. Europe has a substantial but comparatively muted one. Other countries seem very interested indeed.

In the United States, and especially in Europe, research into nanotechnology is facing growing resistance from the same forces that have opposed biotechnology -- and, for that matter, nuclear energy and other new technologies. The claim is that concerns about the safety and morality of nanotechnology justify limitations on research and development. Even Prince Charles has weighed in against nanotechnology, although Ian Bell wonders if the real fuss is about something other than the science:

Charles is afraid that the science could, yes, run amok, with minuscule robots reproducing themselves and proceeding to turn the world into "grey goo."

Many might suspect that the only grey goo we have to worry about is between the ears of HRH, but scientists fear that the prince could do to them what he did to the reputation of contemporary architecture. Charles, clearly, can have no way of knowing what he is talking about, but the fear he expresses is common: do any of us really know what we are doing when we follow where science leads? [16]


The real problem isn't a distrust of science. It's a distrust of people. Such fear is strongest when pessimism about humanity is at a high. Europe, perhaps understandably pessimistic about humanity's prospects in light of recent history, leads the way in throwing some people's only favored invention -- the wet blanket -- over nanotechnology research.

In the more-optimistic United States, concerns exist, but they haven't yet led to a strong interest in regulating nanotechnology. Instead, the U.S. takes an ostrich-like approach to dealing with the realities of the technology; scientific and corporate types try to shift the focus to short-term technological developments while scoffing at the prospects for genuine molecular manufacturing -- the "spooky" stuff, as I've labeled it. Some promising developments are taking place, both at the National Nanotechnology Initiative and within the nanotechnology industry itself, but it's still too early to tell whether this turnaround will really take hold.

MANDARINS AND MEMORIES

In the meantime, other cultures, unencumbered by the residual belief in original sin plaguing even the most secular Westerners, show far less reluctance. Perhaps they are less comfortable and more ambitious than we are, as well. Chinese interest in military nanotechnology has begun to alarm some, especially as China is already third in the world in nanotechnology patent applications. [17]

India's president, Abdul Kalam, is also touting nanotechnology, and as a recent press account captured, he's quite straightforward in saying that one reason for treating nanotechnology as important is that it will lead to revolutionary weaponry:

[Kalam] said carbon nano tubes and its composites would give rise to super strong, smart and intelligent structures in the field of material science and this in turn could lead to new production of nano robots with new types of explosives and sensors for air, land and space systems. "This would revolutionise the total concepts of future warfare," he said. [18]


Yes, it would. Westerners tend to forget it, but it was a few key technologies -- primarily steam navigation and repeating firearms -- that made the era of Western colonialism possible. (See Daniel Headrick's The Tools of Empire [19] for more on this.)

It is, no doubt, as hard for American and European Mandarins to imagine being conquered by Chinese troops equipped with superior weaponry as it was for Chinese Mandarins to imagine the reverse two hundred years ago. Will our mandarins be smart enough to learn from that experience? That's the question, isn't it?

But in the long run, the growth of nanotechnology means that we won't just be worrying about countries, but about individuals. With mature nanotechnology, individuals and small groups will possess powers once available only to nation-states. As with all powers possessed by individuals, these will sometimes be used for good, and sometimes for ill.

Of course, that's just an extension of existing phenomena. My own neighborhood has a few dozen families in it; between them, they probably have enough guns and motorized vehicles (conveniently, mostly SUVs) to wipe out a Roman legion, or a Mongol horde -- forces that, in both cases, once represented the peak of military power on the planet. Nobody worries about the military power that my neighborhood represents, because it's (1) unlikely to be misused, and (2) negligible in a world where most anyone can afford guns and SUVs anyway.

What this suggests is that a world in which nanotechnology is ubiquitous is likely to be less threatening than one in which it's a closely held government monopoly. A world in which nanotechnology is ubiquitous is a rich world. That doesn't preclude bad behavior, but it helps. A world with such diffuse power makes abuse by smaller groups, or even governments, less threatening overall. The average Roman or Mongolian citizen didn't really need guns or SUVs. Back then, the hobbyist machine shop in my neighbor's basement would have been a tool of strategic, even world-changing, importance all by itself. Now, in a different world, it's just a toy, even though it could, in theory, produce dangerous weaponry. It's probably best if nanotechnology works out the same way, with diffusion minimizing the risk that anyone will gain disproportionate power over the rest of us.

In his recent book, The Singularity Is Near, Ray Kurzweil notes that technology often suffices to deal with technological threats, even in the absence of governmental intervention:

When [computer viruses] first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer-network medium in which they live. Yet the "immune system" that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a small fraction of the benefit we receive from the computers and communications links that harbor them. [20]


Software viruses, of course, aren't usually a lethal threat. But Kurzweil notes that this cuts both ways:

The fact that computer viruses are not usually deadly to humans only means that more people are willing to create and release them. The vast majority of software-virus authors would not release viruses if they thought they would kill people. It also means that our response to the danger is that much less intense. Conversely, when it comes to self-replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more serious. [21]


I think that's right. In fact, prophetic works of science fiction -- Neal Stephenson's The Diamond Age, for instance -- generally feature such defensive technologies against rogue nanotechnology. Given the greater threat potential of nanotechnologies, we may have to rely on more than Symantec and McAfee for protection -- but on the other hand, given the huge benefits promised by nanotechnology, we should be willing to go ahead anyway. And I expect we will.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sun Nov 03, 2013 2:37 am

10: LIVE LONG -- AND PROSPER!

One of the ways in which technology is empowering ordinary people involves helping us to live longer. Aristocrats always had much longer and healthier lives than the common folks, who were (more or less literally) plagued with disadvantages such as poor nutrition, unhealthy living conditions, and overwork. There's no longer such a huge discrepancy among classes. As historian Robert Fogel notes:

[T]he life expectancy of the [British] lower classes increased from 41 years at birth in 1875 to about 74 years today, while the life expectancy of the elite increased from 58 years at birth to about 78 years. That is a remarkable improvement. Indeed, there was more than twice as much increase in life expectancies during the past century as there was during the previous 200,000 years. If anything sets the twentieth century apart from the past, it is this huge increase in the longevity of the lower classes. [1]


Now, however, technology seems likely to extend life expectancy even more -- decades or centuries more, while featuring vastly better health and vigor in the bargain. That seems like terrific news to me, but not everyone is so sure.

There is now some reason to think that life spans may become considerably longer in the not-too-distant future. Experiments with rats, fruit flies, and worms have demonstrated that relatively simple modifications ranging from caloric restriction to changes in single genes can produce dramatic increases in life span. So far, these haven't been demonstrated in human beings (whose long life spans make us harder to work with than fruit flies, for whom a doubling only lengthens the experiment by a few days), but many researchers believe that such improvements are feasible.

At the moment, both dietary and genetic approaches to increasing longevity have proved successful. As Richard Miller writes, "In the past two decades, biogerontologists have established that the pace of aging can be decelerated routinely in mammals by dietary or genetic means. . . . There is now . . . incontrovertible evidence, from many fronts that aging in mammals can be decelerated, and that it is not too hard to do this." [2]

Caloric restriction is probably the better-established of the two approaches. Animals fed diets that contain all necessary nutrients, but that provide substantially fewer calories than normal diets (and I mean substantially, as in 40-60 percent fewer), seem to lead longer and healthier lives:


Caloric restriction prolongs the life span by several different, but interrelated, mechanisms that attenuate oxidative stress. . . . The fundamental observation is that dietary restriction reduces damage to cellular macromolecules such as proteins, lipids, and nucleic acids.... Caloric restriction leads to reduction in cellular oxidants such as hydrogen peroxide and increases the activity of endogenous antioxidant enzymes. [3]

In fact, animals on reduced-calorie diets are healthier, not simply longer-lived:

Importantly, the CR diet does not merely postpone diseases and death; it seems to decelerate aging per se and in so doing retards age-related changes in (nearly) every system and cell type examined.... Calorie-restricted rodents remain healthy and active at ages at which control littermates have long since all died. . . . Autopsy studies of CR animals at the end of their life span typically show very low levels of arthritic, neoplastic, and degenerative change, and in functional tests of immunity, memory, muscle strength, and the like, they typically resemble much younger animals of the control group. [4]


No struldbrugs these. [5]

Some humans are experimenting with caloric restriction, but it does not seem likely to appeal to most people, as it may promise a long life -- but a hungry one. Still, it's promising for two reasons. Most obviously it indicates that the aging process -- often regarded with almost supernatural awe -- is in fact susceptible to change through rather simple and crude interventions. Additionally, it seems likely that many of the processes impacted by caloric restriction can be artificially induced by other means. [6]

Genetics also seems to offer hope. Some species are notably longer-lived than others, and it turns out that those long-lived species tend to have many genetic characteristics in common. While one might, via a sufficiently long-term breeding program, produce long-lived humans without any external interventions, doing so would take many generations. And inserting new or modified genes in human beings, though likely possible in time, is difficult and poses significant political problems.

Scientists are, however, already researching drugs that activate or deactivate existing genes in order to retard aging:

Once these two longevity extension mechanisms were identified, many scientists independently tried to develop pharmaceutical interventions by feeding various drugs suspected of regulating these two processes to their laboratory animals. Six of these experiments have shown various signs of success. Although these independent experimenters used different intervention strategies and administered different molecules to their laboratory animals, they each recorded significant increases in the animals' health span and/or a significant extension of the animals' functional abilities .... The pharmaceutical extension of longevity via a delayed onset of senescence has been proved in principle by these six experiments despite their individual limitations. [7]


Biogerontologists like Cambridge University's Aubrey de Grey are looking at far more dramatic interventions that would not merely slow the aging process, but stop or even reverse it, through eliminating DNA damage, replacing senescent cells with new ones, and so on. [8]

But won't we wind up with lots of sick old people to look after? No. In fact, we're actually likely to see fewer people in nursing homes even though we'll have many more old people around. That's because (as in the experiments above) people will be younger, in terms of both health and ability, for their ages. And, after all, who would buy a treatment that just promised to extend your drooling years?

Government programs and pharmaceutical company researchers will likely aim at producing treatments resulting in healthy and vigorous oldsters, not struldbrugs, and it seems even more likely that people will be willing to pay for, and undergo, treatments that promote youth and vigor, but not treatments that simply prolong old age. Today's approach of incremental, one-disease-at-a-time medical research does nothing to help old people in terrible condition, still around simply because they're not quite sick enough to die yet. Genuine aging research is likely to produce a different outcome, restoring youth and health. If it can produce treatments or medications that let people enjoy a longer health span -- more youth, or at least more middle age, by several decades -- then those treatments will sell. If not, then there won't likely be much of a market for treatments that merely extend the worst part of old age.

LADIES, MEET DON JUAN SR.

Thus, if we can expect anything, we can expect treatments that give us more of the good part of our lives -- anywhere from a couple of extra decades to, at the most optimistic end, several extra centuries. And who could be against that?

Well, Leon Kass, the chair of the White House Bioethics Council during President Bush's first term and into the second, for one. "Is it really true that longer life for individuals is an unqualified good?" asks Kass, tossing around similar questions. "If the human life span were increased even by only twenty years, would the pleasures of life increase proportionately? Would professional tennis players really enjoy playing 25 percent more games of tennis? Would the Don Juans of our world feel better for having seduced 1250 women rather than 1000?" [9]

To me, it seems obvious that the answer to all these questions is yes. To Kass, it would seem, the answer is obviously no. But as it happens, we've conducted an experiment along these lines already, and the outcome is not in Kass's favor.

Life spans, after all, have been getting steadily longer since the turn of the twentieth century. According to the Centers for Disease Control, "Since 1900, the average life span of persons in the United States has lengthened by greater than thirty years." [10] That's an average, of course. Nonetheless, there are a lot more old people than there used to be, and they're working longer. Indeed, as Discover magazine has observed, "A century ago, most Americans lived to be about fifty. Today people over a hundred make up the fastest-growing segment of the population." [11] You can argue about the details, but it's clear that typical adults are living longer than at any time in human history.

So we've already tested out an extra twenty years of healthy life, more or less. And yet people -- far from being bored, as Kass suggests they should be -- seem quite anxious to live longer, play more tennis, have more sex, and so on. The market is proof of that: although it possesses little scientific basis at the moment, so-called "anti-aging medicine" is a rapidly growing field -- rapidly growing enough, in fact, that biogerontologists fear it will give legitimate research in the field a bad name. [12] (That's proof that there's demand out there, anyway.) Nor does one hear of many otherwise healthy people who are anxious to die, even at advanced ages, out of sheer boredom. Instead, they seem eager to improve their lives, particularly their sex lives, as the booming sales of drugs like Viagra and Cialis indicate.

One might argue -- and in fact bioethicist Daniel Callahan does argue -- that these desires are selfish and will be satisfied at the expense of society as a whole. [13] All of those perpetually young oldsters, after all, will refuse to retire, and society will stagnate. [14]

That sounds plausible. But greater life expectancy is not the only recent achievement: the past hundred years have also been the most creative and dynamic period in human history. And our institutions certainly aren't controlled by a rigid gerontocracy. (In fact, one finds rigid gerontocracies mostly in communist countries -- the former Soviet Union, the current People's Republic of China -- not in capitalist democracies. So those who fear gerontocracy might do better by opposing communism than aging research.)

At any rate, I'm not too worried. The tendency in America seems to be toward more turnover, not less, in major institutions, even as life spans grow. CEOs don't last nearly as long as they did a few decades ago. University presidents (as my own institution can attest) also seem to have much shorter tenures. Second and third careers (often following voluntary or involuntary early retirements) are common now. As a professor, I see an increasing number of older students entering law school for a variety of reasons, and despite the alleged permanence of faculty jobs, more than half of my law faculty has turned over, in the absence of mandatory retirement, in the fifteen years that I have been teaching. And we've seen all of this in spite of longer lives, and in spite of the abolition of mandatory retirement ages by statute over a decade ago. [15] This is more dynamism, not less.

To his credit, Callahan says that he doesn't want to ban life-extension research or treatment: "I would not want to prohibit the research. I want to stigmatize it. I want to make it look like you are being an utterly irresponsible citizen if you would sort of dump this radical life extension on the rest of us, as if you expect your friends and neighbors to pay for your Social Security at age 125, your Medicare at 145." [16]

He's wise not to suggest a ban. It seems likely that such a ban on life-extension research or treatments would be unconstitutional, in light of the rights to privacy, medical treatment, and free speech established in a number of Supreme Court opinions. As a result of cases like Lawrence v. Texas [17] or Griswold v. Connecticut [18] that establish people's right to control their own bodies, and to pursue courses of medical care that they see as life-enhancing without moralistic interference from the state, such a ban would likely fail. (Would it make a difference that the Supremes tend to be rather long in the tooth? Maybe.)

It seems even more likely, however, that such a ban would be unpopular (and surely even the most hardened supporter of Social Security and Medicare would blanch at the claim that those programs create a moral obligation to die early on the part of their recipients). Nor does it seem likely that if life were extended to such lengths people would want to retire early and collect Medicare.

WHY RETIRE?

Today's notion of "retirement age" is a fairly recent one. Otto von Bismarck is often credited with craftily setting the retirement age at sixty-five because most people wouldn't live that long -- though in fact, Bismarck set it at seventy and it wasn't lowered to sixty-five until later. [19] But the justification for retirement has always been that by retirement age people were nearly used up and deserved a bit of fun and then a comfortable and dignified decline until death. Get rid of the decline and death, and you've given up the justification for subsisting -- as Social Security recipients, at least, do -- off other people's efforts on what amounts to a form of welfare. Logically, retirement should be put off until people are physically or mentally infirm (and perhaps retirement should just be replaced entirely with disability insurance). Those who are able to work should do so, while those desirous of not working should save up as for a long vacation. Alan Greenspan -- the very model of combined productivity and longevity -- has argued repeatedly for extending retirement ages in tandem with increasing life expectancies, and it is possible that in some non-election year his advice may be followed. [20]

In this regard, increased longevity, with (at the very least) much higher retirement ages, could be the salvation of many nations' pension systems, which to varying degrees are facing an actuarial disaster already as the result of longer life spans and lower retirement ages, coupled with lowered birthrates. [21]
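
To make the actuarial point concrete, here is a minimal sketch in Python. It is my own illustration, with made-up round numbers rather than figures from the book or from any real pension system; it simply shows how the worker-to-retiree ratio that pay-as-you-go pensions rest on responds to longer lives and later retirement:

# A toy steady-state model: everyone works from `work_start` to `retire_age`,
# then draws a pension until `life_expectancy`. All ages are assumptions
# chosen only for illustration.

def support_ratio(work_start, retire_age, life_expectancy):
    """Workers paying in per retiree drawing out."""
    working_years = retire_age - work_start
    retired_years = life_expectancy - retire_age
    return working_years / retired_years

if __name__ == "__main__":
    print(round(support_ratio(20, 65, 80), 2))  # 3.0 workers per retiree
    print(round(support_ratio(20, 65, 90), 2))  # 1.8 -- longer lives alone strain the system
    print(round(support_ratio(20, 75, 90), 2))  # 3.67 -- later retirement restores the ratio

On these toy numbers, ten extra years of life with a fixed retirement age cuts the ratio from three workers per retiree to fewer than two, while pushing retirement back ten years more than restores it -- which is the sense in which longer, healthier working lives could rescue rather than ruin pension systems.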

Indeed, although many people worry that longer life spans will lead to overpopulation, the world is now facing what Phillip Longman, writing in Foreign Affairs, calls a "global baby bust." [22] Longer lives and later retirements will help offset at least some of the consequences of falling birthrates -- and people who expect to live longer might be more willing to take time out to bear and raise children, without feeling that it's such a career sacrifice to do so.

But what's surprising to me is that so many people see the idea of living longer as controversial, even morally suspect. Part of this, I suspect, has to do with the usual skepticism regarding the new. Ron Bailey notes:

As history demonstrates, the public's immediate "yuck" reaction to new technologies is a very fallible and highly changeable guide to moral choices or biomedical policy. For example, in 1969 a Harris poll found that a majority of Americans believed that producing test-tube babies was "against God's will." However, less than a decade later, in 1978, more than half of Americans said that they would use in vitro fertilization if they were married and couldn't have a baby any other way. [23]


In fact, as Bailey also notes, many of those who oppose longer lives -- including, for example, Leon Kass -- previously opposed in vitro fertilization too. [24] They tend not to bring that subject up on their own now, though. And that's not all:

New medical technologies have often been opposed on allegedly moral and religious grounds. For centuries autopsies were prohibited as sinful. People rioted against smallpox vaccinations and opposed the pasteurization of milk. Others wanted to ban anesthesia for childbirth because the Bible declared that after the Fall, God told Eve, "In sorrow thou shalt bring forth children" (Gen. 3: 16). [25]


I suspect that people's inherent suspicion of longer life will fade too. In mythology, longer life is always offered by a supernatural force, with a hidden and horrible catch somewhere: you have to surrender your soul, or drink the blood of virgins, or live forever while growing more feeble. Or, in real life as opposed to mythology, such a prize was offered to the desperate and gullible by charlatans who couldn't deliver on their promises anyway.

Of course, cures for baldness and impotence used to be the domain of charlatans too (though they got less attention from evil deities). Now they're cured by products available in pharmacies, sometimes without a prescription. People may joke about Viagra or Rogaine, but they don't fear them. I suspect that's how treatments for extending our lives will come to be seen -- unless those who oppose them manage to get them outlawed now, while they can still capitalize on people's inchoate fears. I doubt they'll succeed. But I'm sure they'll try.

AFTERWORD: AN INTERVIEW ON IMMORTALITY

Aubrey de Grey is a biogerontologist at Cambridge University in England, whose research on longevity -- via an approach known as "Strategies for Engineered Negligible Senescence" -- has gotten a great deal of attention. I think that this subject is on the technological (and political) cusp, and that we'll be hearing more about it, so I interviewed him (via email).

Reynolds: What reasons are there to be optimistic about efforts to slow or stop aging?

de Grey: The main reason to be optimistic is in two parts: First, we can be pretty sure we've identified all the things we need to fix in order to prevent -- and even reverse -- aging, and second, we have either actual therapies or else at least feasible proposals for therapies to repair each of those things (not completely, but thoroughly enough to keep us going until we can fix them better). The confidence that we know everything we need to fix comes most persuasively from the fact that we haven't identified anything new for over twenty years.

Reynolds: What do you think is a reasonable expectation of progress in this department over the next twenty to thirty years?

de Grey: I think we have a 50/50 chance of effectively completely curing aging by then. I should explain that I mean something precise by the suspiciously vague-sounding term "effectively completely." I define an effectively complete cure for aging as the attainment of "escape velocity" in the postponement of aging, which is the point when we're postponing aging for middle-aged people faster than time is passing.

This is a slightly tricky concept, so I'll explain it in more detail. At the moment, a fifty-year-old has roughly a 10 percent greater chance of dying within the next year than a forty-nine-year-old, and a fifty-one-year-old has a 10 percent greater chance than a fifty-year-old, and so on up to at least eighty-five to ninety (after which more complicated things happen). But medical progress means that those actual probabilities are coming down with time. So, since we're fifty only a year after being forty-nine, and so on, each of us has less than a 10 percent greater chance of dying at fifty than at forty-nine -- it's 10 percent minus the amount that medical progress has achieved for fifty-year-olds in the year that we were forty-nine. Thus, if we get to the point where we're bringing down the risk of death at each age faster than 10 percent per year, people will be enjoying a progressively diminishing risk of death in the next year (or, equivalently, a progressively increasing remaining life expectancy) as time passes. That's what I call "escape velocity," and I think it's fair to call it the point where aging is effectively cured.
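
To make the arithmetic behind "escape velocity" concrete, here is a minimal sketch in Python. It is my own illustration, not de Grey's model: the 0.5 percent baseline risk at fifty, the 10-percent-per-year-of-age increase, and the rates of medical progress are all assumed numbers chosen only to show how the two effects trade off:

# Toy model of the "escape velocity" argument. Age-specific mortality rises
# about 10% per year of age, while medical progress cuts age-specific
# mortality by a fixed fraction each calendar year. All numbers are assumptions.

def annual_mortality(age, years_elapsed, base=0.005, age_factor=1.10, progress=0.0):
    """Chance of dying in the next year for someone of `age`,
    `years_elapsed` calendar years after the baseline year."""
    return base * age_factor ** (age - 50) * (1 - progress) ** years_elapsed

def track_person(start_age=50, years=30, progress=0.0):
    """Follow one person's year-by-year risk as they age while medicine improves."""
    return [annual_mortality(start_age + i, i, progress=progress) for i in range(years)]

if __name__ == "__main__":
    stagnant = track_person(progress=0.00)  # no medical progress
    escaping = track_person(progress=0.12)  # progress outpaces aging
    print(f"age 50: {stagnant[0]:.4f}  age 79, no progress: {stagnant[-1]:.4f}")
    print(f"age 50: {escaping[0]:.4f}  age 79, 12%/yr progress: {escaping[-1]:.4f}")

With no progress, the person's risk compounds upward (from 0.5 percent at fifty to roughly 8 percent at seventy-nine in this toy model); once progress exceeds about 9 percent per year, the improvement outruns the aging penalty and the risk actually falls each year -- which is exactly the crossover de Grey is describing.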

Reynolds: What sort of research do you think we should be doing that we're not doing now?

de Grey: Well, there are several approaches to curing aspects of aging that I think are very promising, but which most people seem to think are too hard to be worth trying. One is to obviate mitochondrial mutations, by putting suitably modified copies of the thirteen mitochondrial protein-coding genes into the nucleus. This is hard -- some of those suitable modifications are hard to identify -- but it's definitely feasible. A second one is to find enzymes in bacteria or fungi that can break down stuff that our cells accumulate because they can't break it down, like oxidized cholesterol. The idea here is to put such genes into our cells with gene therapy, thereby enabling them to break the stuff down. If we could do that, it would virtually eliminate strokes and heart attacks; and similar approaches could cure all neurodegenerative diseases and also macular degeneration, the main cause of age-related blindness. A third one is to look for chemicals or enzymes that can cut sugar-induced cross-links (advanced glycation end products). One such compound is known, but it only breaks one class of such links so we need more, and no one is really looking. And maybe the biggest of all is to cure cancer properly, by deleting our telomere-maintenance genes and thereby stopping cancer cells from dividing indefinitely even after they've accumulated lots and lots of mutations.

Reynolds: Some people regard aging research, and efforts to extend life span, with suspicion. Why do you think that is? What is your response to those concerns?

de Grey: I think it's because people don't think extending healthy life span a lot will be possible for centuries. Once they realize that we may be able to reach escape velocity within twenty to thirty years, all these silly reasons people currently present for why it's not a good idea will evaporate overnight. People don't want to think seriously about it yet, for fear of getting their hopes up and having them dashed, and that's all that's holding us back. Because of this, my universal response to all the arguments against curing aging is simple: don't tell me it'll cause us problems, tell me that it'll cause us problems so severe that it's preferable to sit back and send 100,000 people to their deaths every single day, forever. If you can't make a case that the problems outweigh 100,000 deaths a day, don't waste my time.

Reynolds: What are some arguments in favor of life extension?

de Grey: I only have one, really: It'll save 100,000 lives a day. People sometimes say no, this is not saving lives, it's extending lives, but when I ask what the difference is, exactly, no one has yet been able to tell me. Saying that extending old people's lives is not so important as extending young people's lives may be justified today, when older people have less potential life to live (in terms of both quantity and quality) than younger people, but when that difference is seen to be removable (by curing aging), one would have to argue that older people matter less because they have a longer past, even though their potential future is no different from that of younger people. That's ageism in its starkest form, and we've learned to put aside such foolish things as ageism in the rest of society; it's time to do it in the biomedical realm too.

Reynolds: Do you see signs of an organized political movement in opposition to life extension?

de Grey: No, interestingly. I see people making arguments against it, and certainly some of those people are highly influential (Leon Kass, for example), but really they're just using life extension as a vehicle for reinforcing their opposition to things that the public does realize we might be able to do quite soon if we try. They get the public on their side by exploiting the irrationality about life spans that I've described above, then it's easier to move to other topics.

Reynolds: For that matter, do you see signs of an organized movement in support of such efforts?

de Grey: Oh yes. There are really only isolated organizations so far, but they are increasingly cooperating and synergizing. The older ones, like the cryonics outfits and the Life Extension Foundation, are as strong as ever, and they're being joined by other technophile groups like the Foresight and Extropy Institutes and the World Transhumanist Association, plus more explicitly longevity-centric newcomers such as the Immortality Institute. Quite a few blogs are helping this process along nicely, especially Fight Aging! and Futurepundit, and I really appreciate that you're now among them. And of course there's the organization that I cofounded with David Gobel a couple of years ago, the Methuselah Foundation, which funds some of my work through donations but whose main activity is to administer the Methuselah Mouse Prize. [A prize of over $1 million for extending the life of laboratory mice beyond the present record.]

Reynolds: What might life be like for people with a life expectancy of 150 years?

de Grey: Well, we won't have a 150-year life expectancy for very long at all -- we'll race past every so-called "life expectancy" number as fast as we approach it, as outlined above. So maybe I should give an answer to the analogous question regarding indefinite life spans. Life will be very much the same as now, in my view, except without the frail people. People will retire, but not permanently -- only until they need a job again. Adult education will be enormously increased, because education is what makes life never get boring. There will be progressively fewer children around, but we'll get used to that just as easily as we got used to wearing these absurd rubber contraptions whenever we have sex just in order to avoid having too many kids once infant mortality wasn't culling them any more. Another important difference, I'm convinced, is that there will be much less violence, whether it be warfare or serious crime, because life will be much more valued when it's so much more under our control.

Reynolds: What is your response to concerns that life extension therapies might be too expensive for anyone but the rich?

de Grey: This is a very legitimate concern, which society will have to fix as soon as possible. Since 9/11 we all know how bad an idea it is to make a lot of people really angry for a long time -- if the tip of that anger iceberg is willing to sacrifice everything, lots of other people lose everything too. Since rich people will be paying for rejuvenation therapies as a way to live longer, not as a way to get blown up by poor people, everyone will work really hard to make these treatments as cheap as possible as soon as possible. That'll be a lot easier with a bit of forward-planning, though -- e.g., an investment in training a currently unnecessary-looking number of medical professionals. But one way or another, these treatments will definitely become universally available in the end, probably only a few years after they become available at all, even though the cost of doing this will be staggering. The only way to have a sense of proportion about this period is to remember that it'll be the last chapter in what we can definitely call the War on Aging -- people worldwide will readily make the same sort of sacrifices that they make in wartime, in order to end the slaughter as soon as possible.

Reynolds: Leon Kass has suggested various items of literature as cautionary tales. What literary or science fiction stories might you recommend for people interested in this subject?

de Grey: I used to give the obvious answer to this -- [Robert A.] Heinlein. But now I have a new answer. Nick Bostrom, a philosopher at Oxford University here in the UK, has written a "fairy tale" about a dragon that eats 100,000 people a day and its eventual slaying. It's been published in the Journal of Medical Ethics, but it's also online in a great many places, including his website [http://www.nickbostrom.com]. It's absolutely brilliant.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sun Nov 03, 2013 3:07 am

PART 1 OF 2

11: SPACE: IT'S NOT JUST FOR GOVERNMENTS ANYMORE

Life on Earth was a total waste,
I don't care if I'm lost in space,
I'm on a rocket to nowhere!
-- WEBB WILDER [1]


Webb Wilder wrote these words to describe the drawbacks of a swinging-single lifestyle, but they apply all too well to America's decidedly non-swinging space program. The old government-based approach hasn't done very well, but fortunately some smaller players, empowered by technology and competition, are stepping up to the plate. They may be just in time.

The aerospace industry as a whole is in trouble. Even in the aviation sector, there are too few companies for significant competition, and only one major company -- Boeing -- is really competitive in the civilian market. Today's airliners are modest improvements over the 707s that ruled the skies when I was born, but there's nothing on the drawing boards that will be much better.

Ditto for the space sector, only more so. Oh, it's not all bad: the civilian commercial space industry has been booming in terms of revenue. There's actually more commercial money spent on things like communications satellites and Earth observation than on government space programs, these days. But the technology of getting into space hasn't progressed much since the 1960s, industry concentration is even worse, and there's no prospect of any improvement.

Certainly the International Space Station isn't doing much to promote our future in space. Originally designed as a place that would support extensive experimentation, and the on-orbit construction of bigger crewed structures and interplanetary spacecraft, it has now been pared down so thoroughly that it's little more than a jobs program -- lots of people on the ground supporting three astronauts in orbit who spend most of their time simply doing maintenance.

NASA has gotten leaner, but not appreciably meaner. It's like the Space Station writ large: what science and technology development goes on there is mostly an afterthought, with the lion's share of the agency's budget and energy going instead to supporting NASA bureaucrats who produce nothing.

This isn't entirely NASA's fault. At the White House there has been a policy vacuum regarding space programs for over a decade. NASA successfully used inflated cost estimates to kill President George H. W. Bush's 1991 Mars mission plan for fear it would compete with the Space Station. The Clinton administration -- which abolished the National Space Council that used to oversee space policy -- never provided much new guidance beyond Al Gore's lame plan to launch a satellite that would broadcast pictures of Earth via the Internet.

Quite a depressing litany. I could spend another chapter or twenty dwelling on the sordid details. But instead let's address what to do about the dire situation that the interplay of space development and big government has created.

THE BIG ISSUES IN ORBIT

On the governmental front, the first thing we need is some direction at the top. It's virtually impossible to accomplish anything through bureaucracies without strong White House backing. So I suggest reconstituting the National Space Council (traditionally headed by the vice president), whose abolition was opposed by every major space group at the time. (Full disclosure: I was an advisor to the Space Council in 1991-1992.) Once reconstituted, the Space Council should set out to address several major problems:

1. Concentration. There aren't enough firms in the space industry to foster competition, and competition is what gives us expanded capabilities and lower costs. Whether this calls for the Justice Department to pursue a breakup of some of these companies, or whether the government should attempt to foster the growth of startups is unclear, but these options should be considered closely. Neither of these tactics constitutes unwarranted interference with the free market, since what we have now is in essence a cozy, government-supervised cartel anyway.

2. Caution. People in the established space enterprises are afraid to fail. In fact, they're afraid to even try things that might risk failure. A certain amount of caution, of course, is a good thing. But failure is one of the main ways we learn. For instance, the failure of the X-33 single-stage-to-orbit program yielded some important lessons and -- because of the program's comparatively small scale -- didn't produce a serious political backlash. Those lessons could prove useful, but only if a program is in place to take advantage of them. We need to institutionalize learning from failure, something NASA and the aerospace industry did very well in the 1950s, but not so much today. More thoughts on that later.

3. Civilians. The military has caught on to the importance of space to its mission; civilians in the federal government outside NASA (and a disturbing number within NASA) don't feel a comparable urgency. But space isn't a Cold War public-relations arena now. It's essential to economics, military strength, and cultural warfare. Agencies beyond NASA need to get more involved and more supportive: the FCC, for example. More thoughts on that later too.

4. Counting. It isn't sexy, but having a decent accounting system makes a huge difference. In conversations I've had, experts within the government have called NASA's financial management system "abominable." It's not that they don't know where the money goes, so much as that they don't know what they're getting back for it. (This may not be entirely an accident. Government programs seldom encourage that kind of transparency and accountability.) NASA administrator Sean O'Keefe was well positioned by experience to fix this problem during his term, but didn't make enough progress, and the agency has yet to complete this essential first step toward fixing other problems.

5. Cost. Cost is the major barrier to doing things in space. The government is lousy at lowering costs. But it can help promote the technological and economic environments that will allow such things to become feasible on a self-sustaining basis. Sadly, while the federal government has the power to help, it has even greater power to screw things up in this department. NASA needs to rethink its core mission and focus on its original role of developing technologies that enable others to do things, rather than feeling that it must do things itself. NASA needs to see the space tourism industry, for example, as one of its offshoot accomplishments, not, as it sometimes does now, a competitor. It's the free market that lowers costs, and empowering a little friendly competition will do more to promote American supremacy in space than any single R&D program.

In addition, the government needs to do other things to smooth the path: streamlining regulation for commercial space (FAA); protecting radio spectrum needed by space enterprises (FCC); making some sense out of export controls (Commerce and State); and so on. Congress has actually made a start at this, but there's room for much more.

The good news, as Holman Jenkins notes in the Wall Street Journal, is that space advocates have used the Internet to end-run the usual interest groups affecting space policy. In 2004, space activists pressured Congress to pass a space tourism bill, and more recently they've been publicly criticizing NASA's new moon-Mars programs. As Jenkins notes, the old "iron triangle" of government contractors, NASA, and congressional delegations now faces "an effective peanut gallery, their voices magnified by the Web, which has sprouted numerous sites devoted to criticizing and kibitzing about NASA." [2] These grassroots supporters aren't just critiquing government policy, though. They're also working to get things going on their own.

REACHING FOR THE STARS VS. REACHING FOR THE PAPER

The year 2001 is now behind us, but we're a long way from the space stations, lunar bases, and missions to Jupiter that Kubrick and Clarke made so plausible way back when. It's time to get our act together, so that we won't find ourselves in the same straits in 2051. The good news is that some people are doing just that. In fact, private foundations, private companies, and even NASA itself are waking up to some new approaches.

The X-Prize Foundation, organized by space supporters who were frustrated by the slow progress of government programs, decided to resurrect an old surefire motivator: a prize. The X-Prize approach is based on the historic role played by privately funded prizes in developing aviation. (Charles Lindbergh crossed the Atlantic to win the $25,000 Orteig Prize.) Its founders and organizers hope that private initiative, and lean budgets coupled with clear goals, will produce more rapid progress than the government-funded programs organized by space bureaucrats over the past five decades or so. (More full disclosure: I was a pro bono legal advisor to the X-Prize Foundation in its early days.) In particular, the founders are interested in bringing down costs and speeding up launch cycles, so that space travel can benefit from aircraft-type cost efficiencies. And so far it looks as if they're having some success.

The X-Prize Foundation began by offering a $10 million private award for the first team that: "Privately finances, builds & launches a spaceship, able to carry three people to 100 kilometers (62.5 miles); Returns safely to Earth; Repeats the launch with the same ship within 2 weeks." The official prize winner was Burt Rutan's Scaled Composites, with its SpaceShipOne spacecraft. But the fact that twenty-seven teams, from a number of different countries, competed for the prize indicates that the foundation itself is the real winner. The $10 million prize generated a lot more than $10 million worth of investment.

Which is, of course, the point. Ten million dollars in a government program won't get you much; by the time paper is pushed and overhead is allocated, it may not get you anything. A $10 million prize, however, can attract much more -- with competitors driven as much by prestige as by the chance of making a profit.

Another great benefit is that prize-based programs allow for a lot of failure. By definition, if twenty-seven teams go for the prize, at least twenty-six will fail. And that's okay. Government programs, on the other hand, are afraid of failure. So they are either too conservative, playing it safe so as to avoid being blamed for failure, or too drawn out, dragging on so long that, by the time it's clear they're not going anywhere, everyone responsible has died or retired. (In government, or big corporations, it's okay not to succeed, so long as you aren't seen to fail.)

Since we usually learn more by taking chances and failing than by playing it safe and learning nothing, in the right circumstances a prize program is likely to produce more and faster progress. This isn't by accident. As X-Prize cofounder Peter Diamandis noted in recent congressional testimony:

The results of this competition have been miraculous. For the promise of $10 million, over $50 million has been spent in research, development and testing. And where we might normally have expected one or two paper designs resulting from a typical government procurement, we're seeing dozens of real vehicles being built and tested. This is Darwinian evolution applied to spaceships. Rather than paper competition with selection boards, the winner will be determined by ignition of engines and the flight of humans into space. Best of all, we don't pay a single dollar till the result is achieved. [3]


Bureaucracies are good at some things, but doing new things quickly and cheaply isn't one of them. Foundations like X-Prize offer a different approach. I wonder what other government programs could benefit from this kind of thing?

Actually, NASA is starting some prizes of its own, devoting $400,000 over two years toward competitions aimed at developing some pretty cool technology: wireless power transmission (power-beaming) and high-strength space tethers or "elevators." More competitions are expected to follow. [4] It's not a lot of money, but -- as the X-Prize demonstrated -- you don't need a lot of money to accomplish a lot if you spend it well, something that NASA hasn't done historically. That's the real news here.

Both the tether technology and the power-beaming are important on their own, of course. Space "elevator" technology is rapidly moving out of the realm of science fiction, as progress in materials science makes feasible cables strong enough to reach from Earth's surface to a point beyond geosynchronous orbit. At geosynchronous orbit, it takes a satellite twenty-four hours to circle Earth, meaning that a point in geosynchronous orbit remains above the same spot at Earth's equator. A cable (suitably counterweighted) from the surface can thus go straight up to geosynchronous orbit, which conveniently enough is also the most useful orbit for satellites. With such a cable, it becomes possible to reach orbit via electric motors (which themselves can be powered by solar cells in space, above earthbound clouds, smog, and atmospheric haze) instead of rockets, making the prospect of cheap spaceflight look much more attainable. And if you can get to space cheaply, you can build big things there cheaply -- instead of expensively and badly, as we do now -- and if you can do that, among the things you can build are solar power satellites that convert the unfiltered twenty-four-hour sunlight of space into electricity to send back to Earth. [5]
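
For readers who want to check the twenty-four-hour-orbit claim, the geosynchronous altitude follows directly from Kepler's third law. The short Python sketch below is my own illustration, not anything from NASA's prize program, using standard published constants.

import math

# Kepler's third law for a circular orbit: r^3 = mu * T^2 / (4 * pi^2).
# A satellite whose period T matches one rotation of Earth stays over the
# same spot on the equator -- the geosynchronous orbit described above.

MU_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1       # Earth's rotation period relative to the stars, s
EARTH_RADIUS = 6378137.0     # equatorial radius, m

orbital_radius = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (orbital_radius - EARTH_RADIUS) / 1000.0

print(f"geosynchronous altitude: about {altitude_km:,.0f} km")
# ~35,786 km -- roughly the height a space-elevator cable must exceed
# (plus a counterweight beyond it) to stay taut.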

So how do you get the power to Earth? Well, you could send it down a cable, if your satellite's at geosynchronous orbit, but you can also beam it, which lets you send power to a much wider variety of terrestrial locations, from a much wider variety of orbits. Hence the relevance of the power-beaming work.

Solar power satellites offer one answer to a question raised by the current wave of enthusiasm for hydrogen-fueled cars: Where will the hydrogen come from? You need electricity to produce hydrogen, and lots of it -- hydrogen is really more like a power-storage system than a fuel -- and if you get that electricity from burning coal or oil you pretty much vitiate the environmental benefits of hydrogen. That's just substituting smokestacks for tailpipes, which is no great improvement. Big nuclear plants are another option, of course, but some people have a problem with those.
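
To put a rough number on "lots of it," here is a minimal back-of-the-envelope sketch -- my own illustrative figures, not anything from the book -- of the electricity needed to make hydrogen by electrolysis.

# Rough, illustrative arithmetic (assumed ballpark figures):
# how much electricity does it take to make hydrogen by electrolysis?

H2_ENERGY_KWH_PER_KG = 33.3     # lower heating value of hydrogen, ~33 kWh per kg
ELECTROLYZER_EFFICIENCY = 0.70  # assumed ~70 percent efficient electrolysis

electricity_per_kg = H2_ENERGY_KWH_PER_KG / ELECTROLYZER_EFFICIENCY
print(f"~{electricity_per_kg:.0f} kWh of electricity per kg of hydrogen")

# If a fuel-cell car uses very roughly 1 kg of hydrogen per 100 km (assumed),
# every kilometer driven needs about half a kilowatt-hour of electricity --
# which has to come from smokestacks, reactors, or something like solar
# power satellites.
print(f"~{electricity_per_kg / 100:.2f} kWh of electricity per km driven")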

What's really revolutionary today isn't the ideas -- people have been talking and thinking about solar power satellites for pretty much my entire lifetime -- but the means by which they are being pursued. Instead of going for a massive Apollo (or worse, space shuttle) sort of program, NASA is attacking these problems incrementally, and it's getting other minds involved. The way the prize program is structured (contestants get to keep their own intellectual property) encourages people to participate, and the goals get more ambitious over time.

What's more, NASA seems to have identified a suite of technologies to be developed by prize-winning competitions that, taken together, look pretty promising where more ambitious projects are concerned:

• aerocapture demonstrations
• micro reentry vehicles
• robotic lunar soft landers
• station-keeping solar sails
• robotic triathlon
• human-robotic analog research campaigns
• autonomous drills
• lunar all-terrain vehicles
• precision landers
• telerobotic construction
• power-storage breakthroughs
• radiation-shield breakthroughs

Put all this stuff together, and you've got the makings of an ambitious space program, with the R&D done on the cheap. Maybe there's hope for NASA yet. Or at least for our future in space.

MORE EGGS IN MORE BASKETS

There had better be, because our future may depend on getting a sizable chunk of humanity into outer space. I attended a conference on terrorism, war, and advanced technology a few years ago, and after hearing about everything from genetically engineered smallpox to military nanoplagues, one of the participants remarked, "This makes space colonization look a lot more urgent than I had thought."

He's not the only one to feel that way. Stephen Hawking says that humanity won't survive the next thousand years unless we colonize space. I think that Hawking is an optimist.

We've seen a certain amount of worry about smallpox, anthrax, and various other bioweapons since 9/11. At the moment, and over the next five or ten years, these worries, while not without basis, are probably exaggerated. At present there aren't any really satisfactory biological weapons. Anthrax is scary, but not capable of wiping out large (that is, crippling) numbers of people. Smallpox, though a very dangerous threat, is hard to come by and easy to vaccinate against, and the populations whose members are the most likely to employ it as a weapon (say, impoverished Islamic countries) are also those most vulnerable to it if, as is almost inevitable, it gets out of hand once used.

That will change, though. Already there are troubling indications that far more dangerous biological weapons are on the horizon, and the technology needed to develop them is steadily becoming cheaper and more available.

That's not all bad -- the spread of such technology will make defenses and antidotes easier to come up with too. But over the long term, by which I mean the next century, not the next millennium, disaster may hold the edge over prevention: a nasty biological agent only has to get out once to devastate humanity, no matter how many times other such agents were contained previously.

Nor is biological warfare the only thing we have to fear. Nuclear weapons are spreading, and there are a number of ways to modify nuclear weapons so as to produce lethal levels of fallout around the globe with surprisingly few of the devices. That's not yet a serious threat, but it will become so within a couple of decades.

More talked about, though probably less of a threat in coming decades, is nanotechnology. Biological weapons are likely to exceed nanotech as a threat for some time, but not forever. Again, within this century misuse of nanotech will be a danger.

Want farther-out scenarios? Private companies are already launching asteroid rendezvous missions. Perhaps in the not-too-distant future, a mission to divert a substantial asteroid from its orbit to strike Earth may be on the to-do list of small, disgruntled nations and death-obsessed terror groups (or perhaps Luddites who believe that smashing humanity back to the Neolithic would be a wonderful thing). Imagine the Unabomber with a space suit and better resources.

No matter. Readers of this book are no doubt sophisticated enough to come up with their own apocalyptic scenarios. The real question is, what are we going to do about it?

In the short term, prevention and defense strategies make sense. But such strategies take you only so far. As Robert Heinlein once said, Earth is too fragile a basket to hold all of our eggs. We need to diversify, to create more baskets. Colonies on the moon, on Mars, in orbit, perhaps on asteroids and beyond would disperse humanity beyond the risk of most catastrophes short of a solar explosion.

Interestingly, spreading human settlement to outer space is already official United States policy. In the 1988 Space Settlements Act, Congress declared as a national goal "the extension of human life beyond Earth's atmosphere, leading ultimately to the establishment of space settlements," and required periodic reports from NASA on progress toward that goal -- reports on which NASA has dropped the ball. [6] The policy was endorsed again by Presidents Reagan and Bush (the Clinton administration didn't exactly renounce this goal, but didn't emphasize it either). But talk is cheap; not much has been done.

What would a space policy aimed at settling humanity throughout the solar system look like? Not much like the one we've got, unfortunately.

The most important goal of such a policy has to be to lower costs. Doing things in space is expensive -- horribly so. In fact, in many ways it's more expensive than it was in the 1960s. This is no surprise: it's the tendency of government programs to drive up costs over time, and human spaceflight has up to now been an exclusively government-run program.

That's why promoting the commercialization of outer space is so important. Market forces lower costs; government bureaucracies drive them up. Among the cost-lowering programs likely to make the biggest difference is space tourism, which is beginning to look like a viable industry in the short term. (Just ask Dennis Tito, Greg Olsen, or Mark Shuttleworth, all of whom have already bought rides into space on Russian rockets, at a cost of many millions of dollars each.) We should be promoting such commercialization any way we can, but especially through regulatory relief and liability protections.

Government programs should be aimed at research and development that will produce breakthroughs in lowering costs: cheaper, more reliable engines; new technologies like laser launch, solid-liquid hybrid rocket engines, space elevators. Once this technology is produced, it should be released to the private sector as quickly as possible.

Other research should aim at long-term problems: fully closed life support systems capable of supporting humans for extended periods (you might think that the International Space Station would provide a platform for this kind of research, but it doesn't); exploration of asteroids, the moon, and Mars with an eye toward discovering resources that are essential for colonization; and so on.

Putting these policies into place would require drastic change at NASA, which is now primarily a trucking-and-hotel company, putting most of its resources into the Space Station and the space shuttle, which now exists mostly to take people to and from the Space Station. But we've been stuck in that particular loop for nearly twenty years. President Bush has pushed a return to the moon and a mission to Mars as top goals, and Congress has recently endorsed them. But so far, actual movement seems small.

It's time for that to change. Like a chick that has grown too big for its egg, we must emerge or die. I prefer the former. Apparently, judging from the new proposals to return to the Moon and send humans to Mars, Congress and the Bush administration feel the same way. But there are some issues to be resolved before we go to Mars.

The first question is, how?

MARS OR BUST!

One well-known proposal that NASA has shown some interest in features Bob Zubrin's Mars Direct mission architecture, which uses mostly proven technology and which promises to be much, much cheaper than earlier plans. Mars Direct involves flying automated fuel factories to Mars in advance of astronauts; the astronauts land to find a fully fueled return vehicle waiting for them. The factories remain behind to make more fuel for future operations.

Zubrin thinks that we could do a Mars mission using this architecture for $30-40 billion -- which, even if you double it, is still manageable. Back when I worked for Al Gore's presidential campaign in 1988, I did a paper on Mars missions that concluded that $80-90 billion (in 1988 dollars, about the cost of the Apollo program) was the maximum feasible expenditure on a Mars mission. Zubrin's estimate falls well below that figure. True, we have the war on terrorism to fight now, but in 1988 (and for that matter, during Apollo's development) we had the Cold War.

A more cogent criticism than cost is what we have to show for it when we're done. I'm a fan of Zubrin's approach. But I agree with other critics that the real key to successful space settlement over the long term is to take the work away from governments and turn it over to profit-making businesses -- ordinary people working in market structures that maximize creativity and willingness to take risks. The government has an important early role to play in exploring new territories before they're settled -- it wasn't private enterprise that financed Lewis and Clark, after all -- but government programs aren't much good once the trail-breaking phase has passed. And the earlier commercial participation comes in, the better.

If you want settlement and development, you need to give people an incentive. One possibility, discussed by space enthusiasts for some time, is a property-rights regime modeled on the American West, with land grants for those who actually establish a presence on the moon or Mars. Some have, of course, derided the idea of a "Wild West" approach to space development, but other people like the idea of a "Moon Rush," which I suppose could be expanded in time to a "Mars Rush."

Could our "cowboy" president get behind a Wild West approach to space settlement? He'd be accused of unilateralism, disrespect for other nations, and, of course, of taking a "cowboy approach" to outer space that's sure to infuriate other nations who want to be players but who can't compete along those lines -- like, say, the French. Hmm. When you look at it that way, there doesn't seem to be much doubt about what he'll do, does there?

One reason for optimism is that this time around, cost and technology are getting a lot more thought than when NASA was looking at Mars missions in the 1980s. Nuclear propulsion is at the forefront this time -- back then, it was a political nonstarter. It's possible to go to Mars using chemical rockets alone, but just barely. Using nuclear space propulsion -- where a reactor heats gases to form high-speed exhaust rather than using chemical explosions to do so -- cuts travel times from six months to two, and, because of better specific impulse (efficiency), allows for higher payloads. (There are no plans, as far as I know, to use Orion-style nuclear-explosive propulsion. Should I turn out to be wrong about this, it will probably be a sign that somebody somewhere is very worried about something.)
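
The payoff from higher specific impulse is easy to see with the standard Tsiolkovsky rocket equation. The short sketch below is my own illustration with assumed round numbers (a 6 km/s burn, typical textbook Isp values), not figures from any actual Mars mission design.

import math

# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf).
# Same delta-v budget, two engines: higher specific impulse means a far
# smaller share of the ship has to be propellant, leaving room for payload.

G0 = 9.80665        # standard gravity, m/s^2
DELTA_V = 6000.0    # assumed delta-v for an interplanetary burn, m/s

for label, isp in [("chemical (LOX/LH2)", 450), ("nuclear thermal", 900)]:
    mass_ratio = math.exp(DELTA_V / (isp * G0))      # initial mass / final mass
    propellant_fraction = 1.0 - 1.0 / mass_ratio     # fraction of initial mass burned
    print(f"{label:20s} Isp={isp} s -> mass ratio {mass_ratio:.2f}, "
          f"propellant {propellant_fraction:.0%} of initial mass")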

The United States experimented with nuclear propulsion as part of the Kiwi and Nerva projects in the 1960s and early 1970s. The results were extraordinarily promising, but the projects died because, with the United States already abandoning the moon and giving up on Mars, there was no plausible application for the technology. Nuclear propulsion is mostly useful beyond low-earth orbit, and we were in the process of abandoning everything beyond low-earth orbit.

That appears to be changing, and it's a good thing. I think that the "settlement" part is as important as the "exploration" part. And while exploration is possible based on chemical rockets alone, settlement without using nuclear power will be much more difficult.

Nuclear space propulsion has had its critics and opponents for years, though weirdly their opposition stems largely from fears that it will lead to "nuclear-powered space battle stations." This isn't quite as weird as Rep. Dennis Kucinich's legislation to ban satellite-based "mind control devices," [7] but it seems pretty far down the list of things we should be concerned about. With worries about earthbound nuclear weapons in the hands of Iran, North Korea, and perhaps assorted terrorist groups, it's hard to take seriously claims that possible American military activity in space, spun off from civilian Mars missions, might be our biggest problem. Indeed, the whole concern about "space battle stations" has a faintly musty air about it, redolent of circa-1984 "nuclear freeze" propaganda. Who would we fight in space today? Aliens? And if we needed to do that, wouldn't nuclear-powered space battle stations be a good thing?

Nor are environmental concerns significant. Space nuclear reactors would be launched in a "cold" (and thus safe) state and not powered up until they were safely in orbit. And again, compared with the environmental threat caused by rogue nuclear weapons, their dangers seem minuscule.

The administration's Mars proposal is at least a step in the right direction, and its adoption of nuclear space propulsion indicates more realism than the flags-and-footprints approach favored by the previous Bush administration. What's more, the use of nuclear propulsion, which makes interplanetary travel both cheaper and faster, greatly increases the likelihood of going beyond flags and footprints to true space settlement. It's about time.

But there are still questions. Imagine that you've got a lot of money. No, more than that. A lot of money. Now imagine that you want to go to Mars. Oh, you already do? Me too. Then imagine that with your money you've built a spaceship -- perhaps along the lines of Zubrin's Mars Direct mission architecture, though for our purposes the details don't matter. If you prefer, you may substitute antigravity or the Mannschenn Drive as your mechanism of choice.

Regardless of technology, you've got a craft that will take you to Mars and back, in one piece, along with sufficient supplies on the outbound leg and some samples when you come back. You're going to find out firsthand what Viking couldn't settle: whether there's life on Mars. You'll also do some research aimed at laying the groundwork for Martian colonization. Are you ready to go?

Not quite. You see, there might be life on Mars.

Well, duh. That's what you're going to find out, isn't it?

Yes. But if you find it's there, then what? You see, the 1967 Outer Space Treaty requires its signatories to conduct explorations of celestial bodies "so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter." [8] When you get to Mars you may create the first kind of contamination, and if there's life on Mars, you may create the second -- assuming that you plan to return to Earth.

That human explorers will "contaminate" Mars is inevitable -- humans contain oceans of bacteria, and a human presence on Mars is sure to leave some behind. Even if all wastes are bagged and returned to Earth (unlikely because of the expense involved), some germs are bound to escape via air leaks and transport on surfaces of Mars suits and other objects that exit the spacecraft.

NASA now takes extensive steps to sterilize unmanned spacecraft so as to keep Earth germs from reaching other planets, something known in the trade as "forward contamination." Such precautions may be adequate for robotic missions, but it is simply impossible to ensure that missions involving people won't result in contamination. They will.

Given the impossibility of avoiding some sort of "contamination," the treaty obviously and sensibly does not forbid mere "contamination." It prohibits harmful contamination. What does that mean? Well, if Mars is lifeless, harmful contamination can only be contamination that interferes with human purposes. To scientists at the moment, any contamination seems harmful, since it makes it harder to tell whether any life found on Mars is native or arrived from Earth. ("Hey, look, Mars has E. coli! -- er, or some space-probe-manufacturing guy on Earth has poor personal hygiene.") But once humans go to Mars, the framework is likely to change.

If Mars has life of its own (unlikely, but not impossible, especially in light of some intriguing new evidence), the situation gets harder. First, we may have to consider whether Martian bacteria or lichens or whatever may be harmed by any organisms humans bring. Then we have to decide whether we care about that. Is harm to bacteria the sort of harm the Outer Space Treaty was meant to prevent? Almost certainly not, but no doubt bacteria-rights advocates will do their best to get a debate fermenting here on Earth.

Martian bacteria raise another question: the question of "back contamination," as it's called -- contamination of Earth by Martian organisms. That, too, will be difficult to rule out in the event of a manned mission. Oh, it's unlikely that bacteria that can survive in the Martian environment will flourish on Earth, and even less likely that they would prove harmful to Earth life. But unlikely isn't the same as impossible, and people are likely to worry. In fact, they already have worried about it in the context of robotic sample-return missions.

Mars colonization fans -- of whom I am certainly one -- need to ensure that these questions are addressed long before any humans set out for Mars. As we've learned in many other contexts, sometimes the environmental impact statement takes longer than the underlying project.

Of course, this may all be much ado about nothing -- as the National Research Council has noted, nontrivial quantities of Martian material have been deposited on Earth as meteorites, blasted loose from Mars by asteroidal impacts, and it is entirely possible that bacteria could have survived the journey. [9] Smaller quantities (because of Earth's greater gravity) of Earth material have presumably gone to Mars in the same fashion. And in the early days of the solar system, when life was new, the greater degree of celestial bombardment on both planets would have made such exchanges far more frequent. So if we find life on Mars, it may simply be Earth life that has beaten us there. Or perhaps it will be our ancestral Mars life welcoming us home. In neither case will we have to worry much about harmful contamination.

But what about beneficial contamination?

Mars, as far as we can tell, is a dead world. Even if it turns out to host some forms of life, they are almost certain to be limited to bacteria, akin to the extremophiles that populate places like volcanoes, undersea thermal vents, and deep subsurface rock formations; and their distribution is likely to be similarly circumscribed. Algae would be big, big news.

But Mars needn't remain dead (or near-dead). For several decades people have been looking at "terraforming" Mars by giving it an earthlike -- or at least more earthlike -- climate. (For the technically inclined, a superb engineering textbook on the subject is Martyn Fogg's Terraforming: Engineering Planetary Environments, a thoroughly practical book published by the thoroughly practical SAE. [10]) In essence, the process would involve setting up factories that would produce artificial greenhouse gases (Bob Zubrin and Chris McKay suggest perfluoromethane, CF4). In his recent book Entering Space, Zubrin notes:

If CF4 were produced and released on Mars at the same rate as chlorofluorocarbon (CFC) gases are currently being produced on Earth (about 1000 tonnes per hour), the average global temperature of the Red Planet would be increased by 10 degrees C within a few decades. This temperature rise would cause vast amounts of carbon dioxide to outgas from the regolith, which would warm the planet further, since CO2 is a greenhouse gas.... The net result of such a program could be the creation of a Mars with acceptable atmospheric pressure and temperature, and liquid water on its surface within fifty years of the start of the program. [11]


The resultant atmosphere wouldn't be breathable by humans yet, but it would support crops and allow people to walk around outside with no more than an oxygen mask in the years before a fully breathable atmosphere could be established. How much is a whole new planet worth?

Mars currently has a dry-land area approximating that of Earth. A terraformed Mars would have a smaller dry-land area, of course, because it would have oceans, or at least seas. Nonetheless, we are talking about a huge new area for human settlement. A colonized Mars would also be a way of spreading humanity and other Earth life to new places, making the species, and human civilization, less vulnerable to natural or artificial calamity. Perhaps even more importantly, we would also derive the protection from social, cultural, and political stagnation that a frontier provides.

IT'S MY PLANET AND I'LL GRIPE IF I WANT TO

Naturally, this will make some people unhappy. Though terraforming would not, in my opinion, violate the Outer Space Treaty -- which prohibits only "harmful," not beneficial, contamination -- there are sure to be vigorous objections raised from certain quarters. Indeed, such objections have already appeared in a few scattered locations.

Objections to terraforming can be roughly categorized as follows:

• the Peter Sellers objection ("Now is not the time," as Inspector Clouseau kept telling his valet, Cato.)
• the scientific objection
• the theological objection
• the human-cancer objection

The Peter Sellers objection is that terraforming efforts should not begin until we have extensive knowledge of the Martian geology and climate. Efforts that are begun too soon may not work as anticipated and might conceivably interfere with more knowledgeable (and thus more prepared) efforts later.

Little to argue with here. Though of course experts may disagree as to when we know enough, and undoubtedly people opposed to terraforming on other grounds may for political reasons raise this objection rather than reveal their true motives, the basic principle is sound. Martian terraforming efforts should not go off half-cocked. The good news is that the need for a solid database on Martian climate and geology makes today's unmanned missions -- which space settlement enthusiasts view as unexciting -- quite valuable. We're simply not in a position to begin terraforming efforts on Mars now, but by advancing our knowledge of important factors we nonetheless hasten the day when it will take place. Think of the robotic probes visiting Mars as the latter-day equivalents of Lewis and Clark or Zebulon Pike.

The scientific objection may be viewed as a near-cousin of the Sellers objection. Once terraforming efforts begin in earnest, information about the primeval Mars will be lost. Scientists can thus be expected to protest that terraforming should not begin until all interesting data about Mars in its current state have been extracted. Unfortunately, that is a task that will never be entirely completed, meaning that we will have to weigh the value of additional scientific data (which is likely to be significant) against the value of an entire new world for settlement, which is likely to be colossal.

The theological objection involves no such trade-offs, but rather an assertion that human beings simply are not meant to settle other planets -- a variation on the old "If man were meant to fly he'd have wings" argument from the nineteenth century. Variants of this argument, in keeping with certain ideas of quasi-religious Deep Ecology adherents, might say that the "pristine" character of an "unspoiled" Mars is of such enormous, even "sacred," value that no development -- or perhaps even human exploration -- should be permitted.

As the use of words like "unspoiled" and "pristine" suggests, this is fundamentally an aesthetic view masquerading as a religious one. (And, indeed, the world's major religions offer precious little support to such a view.) One might plausibly prefer an empty, dead Mars over a living, vibrant one, just as one might plausibly prefer the Backstreet Boys to the Beatles. But, being founded in taste, such views do not lend themselves well to rational debate, nor are they likely to prove persuasive to those who are not already predisposed to them.

The human-cancer objection is essentially a stronger version of the theological objection: humanity is so awful, such a blight on the face of the earth, that the last thing we should want is for people to spread everywhere else, carrying their nastiness with them and polluting everything they contact.

It is always a surprise to me that people who view humanity as a cancer somehow continue to live, and even to raise children, rather than committing the honorable suicide that self-diagnosis as a cancer cell would seem to call for. But the human mind is entirely capable of holding contradictory views as it operates. And this view does characterize certain members of the environmental movement, more concerned with saving nature from evil humans than preserving it for our enjoyment.

I predict that these peculiarly gnostic environmentalists will be most vocal in opposing terraforming efforts. And by speaking out against the terraforming of a dead Mars, or even a Mars inhabited by bacteria and lichens, such people will be showing their true colors. After all, one may be motivated to protect a sequoia forest either from hatred of loggers or for love of trees. But those opposing development of rocks and sand are pretty obviously not acting out of concern for any kind of life.

So pay attention to who denounces proposals for Martian terraforming as they begin to appear more frequently in mainstream discourse. It will not only be of interest in itself, but will tell you something about how you ought to view the denouncers' other positions.

I'm not the only one to look at this question. Robert Pinson recently wrote an article for the Environmental Law Reporter on the environmental ethics of terraforming Mars. After surveying the arguments pro and con, Pinson concludes:

The most applicable environmental ethic to terraforming Mars is anthropocentrism. It puts our interests at the forefront while still ensuring the existence of all life. It seems obvious that we should give ourselves the highest level of intrinsic worth since we are the ones placing the value. Life, of course, has the ultimate intrinsic worth, but we are a part of that life. It is in our best interest to preserve and expand life. What better way than by changing a planet that is currently unable to sustain life into one that can. Not only will we enrich our lives but also the life around us. We cannot, of course, begin terraforming today, but we can research and plan for the future. [12]


Of course, it's possible that the people who will be making such decisions won't be inhabitants of Earth, but rather settlers on Mars.

A MARTIAN CONSTITUTION

In response to a column of mine on Mars awhile back, reader Philip Shropshire posted a comment asking: "I'm curious as to what you think. Would you prefer to live under the American Constitution on Mars, or a new constitution that you designed yourself ... in case you're looking for next week's column material."

Well, I'm always happy for suggestions (and, in fact, I did get a column out of this one), but this isn't actually a new idea. In fact, the Smithsonian Institution, in cooperation with Boston University's Center for Democracy, produced a set of principles for creating a new constitution to govern human societies on Mars and elsewhere in outer space. Fellow lawyer John Ragosta and I drafted an alternative proposal that was published in the American Bar Association's journal of law, science, and technology, Jurimetrics. [13]

Shropshire makes it easy, of course: I'd rather live under a new constitution that I designed myself. It's the constitutions designed by other people that worry me. On second thought, the United States Constitution isn't perfect, but it's lasted a long time, through all sorts of stresses, without producing the sort of tyranny or genocide that has been all-too-common elsewhere, even in countries we generally regard as civilized. So perhaps it's been demonstrated to be "fault tolerant."

But the interesting (and worrying) thing about proposals for new constitutions for outer space is that they mostly take it for granted that the United States Constitution offers too much freedom. Writing in Ad Astra magazine some years ago, William Wu observed that "[s]pace colonists may face life on a political leash," and compared space colony life to that in an oppressive company town.

In a company town, freedom of expression may be in danger. Democracy permits citizens to make public statements about political figures that they would never say openly about their immediate bosses or top-level officers of the companies for which they work. The security and efficiency of a well-organized and well-run company town in space might be politically stifling.... The colonization of space may point toward a weakening of individual rights and a strengthening of government power. [14]


The participants in the Smithsonian conference on space governance seemed to feel the same way, stressing the need to balance individual freedoms against the needs of the community and emphasizing a wide array of social controls: "The imperatives of the community safety," they wrote, "and individual survival within the unique environment of outer space shall be guaranteed in harmony with the exercise of such fundamental individual rights as freedom of speech, religion, assembly, contract, travel to, in and from outer space, media and communications." [15]

There's no similar provision in the United States Constitution, and this probably reflects the participants' belief that in space we won't be able to afford as much freedom as we can on Earth. Space as less free than Earth? This view is probably wrong, but nonetheless it concerns me a great deal. It is probably wrong because all of the available evidence is that things don't work this way. Although there are some simulated Mars bases on Earth now, the closest current analogs to a space colony are Antarctic bases. But these are not harsh, dictatorial environments. By contrast, the kinds of conditions that Antarctic crews face tend to force the abandonment of traditional hierarchical systems in favor of more flexible ones. It's more freedom for the little guy, not less. As Andrew Lawler writes:

A winter base in Antarctica is a unique world, where the cook often has greater prestige than the officer-in-charge and the radio operator can have more influence than an accomplished scientist. The traditional hierarchical structure of the military, and of government as a whole, breaks down among a small group of people isolated from others for months at a time. This was a controversial and embarrassing realization for the Navy. Flexible authority and sharing of tasks among everyone are vital for the well-being of a small, isolated group. This can run against the grain of highly specialized scientists and career military officers. [16]


Experience, thus, tends to suggest that overly rigid and controlled environments are harmful to survival under such conditions, not essential to it. George Robinson and Harold White agree, stressing in their book Envoys of Mankind that "the real answer to [space] community success probably lies in motivated, self-actualized, strong, adventurous, unconventional, yet disciplined and well-trained human beings." [17] In other words, empowering ordinary people is the key to success.

A proscriptive attitude toward liberties in space societies (or even the suggestion of such an attitude) worries me because I believe that, consciously or unconsciously, the way we envision space societies mirrors the way we see our own society in many ways. Many characteristics of space societies, such as strong dependence on advanced technology; problems with maintaining environmental quality; the need for people to work together under stress; and individuals' strong dependence upon their society for basic necessities such as food and water are simply amplified images of characteristics already present, and growing, in our own society.

This is a good reason for being interested in space societies, since by studying their problems we gain a window into our future on Earth. It is also a reason to be worried. For if there is a general belief that a high level of interdependence and environmental fragility means that space settlers will not be able to afford individual rights, then what of those of us who remain on Earth under similar conditions? I don't think that the march of technology has made individual rights obsolete, but I worry that others may think so. And I believe that it is wrong. Just as space societies will need access to the creativity and individual initiative of their inhabitants to flourish, so will societies on Earth. Surely the failure of totalitarian societies worldwide to achieve any kind of social -- or even material -- greatness illustrates that. The role of technology has generally been to make us freer, not less free.

In fact, I think that although early Mars societies will not offer certain kinds of freedoms that we enjoy on Earth -- such as the freedom to be nonproductive sponges living off the labors of others -- they will offer more freedom for individuals to make something of themselves. Mars visionary Bob Zubrin agrees and compares the settlement of Mars to the settlement of the American West:

The frontier drove the development of democracy in America by creating a self-reliant population that insisted on the right to self-government. It is doubtful that democracy can persist without such people....

Democracy in America and elsewhere in western civilization needs a shot in the arm. That boost can only come from the example of a frontier people whose civilization incorporates the ethos that breathed the spirit into democracy in America in the first place. As Americans showed Europe in the last century, so in the next the Martians can show us the way away from oligarchy. [18]


I think he's probably right, and it is this notion of space as an empowering frontier that animates many space advocates. I also suspect that, being populated by people willing to undertake a tremendous life-altering journey in order to make something of themselves, Mars will be home to those who are unwilling to be subjected to the sort of pointless regulation that is all too often the rule on Earth. In the face of such regulation, they'll start writing their own constitutions, and what we earthlings have to say about it will matter very little.

This is as it should be. But it's not clear whether those people will trace their roots to America or to other nations -- most notably China -- with considerable space ambitions of their own.

RED STAR RISING?

China's agenda is ambitious. Crews of two astronauts. Space walks. Docking. There are two responses to this. One is dismissive: "Ho, hum. We did this with Gemini forty years ago. The Chinese are way behind the curve." The other is paranoid: "Oh, no! The Chinese are going to take over outer space!"

Both are unjustified, though the first probably more than the second. Yes, China is playing catch-up, doing things we used to do that to them are new. It's easy to dismiss this sort of thing, I suppose, just as it was easy to make fun of the first wave of Japanese automobiles to hit American shores. They really weren't very good, compared to the American cars of the day. Yet, as Detroit learned, such amusement was temporary and expensive: the American cars stayed about the same, while the Japanese cars got steadily better. The same thing may be happening in space; at least, we shouldn't ignore the possibility.

Although the Chinese are playing catch-up right now, they're likely to enjoy the second-mover's advantage. It's easier to catch up than to forge new ground. And although China is vastly poorer and weaker than the United States is today, in terms of absolute capabilities the China of today compares much more closely with the United States of, say, 1965 -- and is actually ahead in quite a few areas. Plus, they know what's possible; we were trying to figure that out.

Then, too, the absence of any real forward progress since the 1970s on the part of the United States means that China doesn't have all that much catching up to do.

The bottom line: Our position is not so advanced that we can afford to look down on the Chinese. A determined China could leapfrog us in a variety of ways in a surprisingly short time. But that, of course, is no reason to be paranoid either. While China could surpass us, competition between the United States and China might just be a good thing.

First, as I've noted here before, there's good reason to believe that humanity won't survive over the long term (or even the not-so- terribly-long term) if we don't settle outer space. From that perspective, anything that jumpstarts the process again should be welcome. It's no coincidence that the United States' forward progress in space started to fade as soon as the contest with the Soviets ended. A new competition might encourage more effort -- and more focus -- on our part.

Second, the United States and China are, almost inevitably, going to begin competing more across a variety of fronts. Better that we should be competing in space than in some more dangerous arenas here on Earth. Many people believe, in fact, that the space race helped to defuse the tensions of the Cold War, and some think that this was part of President Kennedy's purpose in setting out a lunar landing as a public goal. (Jerome Wiesner, who served as an adviser to JFK, once told me that this was very much Kennedy's intent.)

GOLIATHS IN SPACE

As much as I am a fan of small-scale approaches like the X-Prize, smaller is only sometimes better, and Goliaths have their own virtues. In the case of space travel there's one big approach that was abandoned long ago, but that may well come back, with help from the Chinese.

In the old science fiction movies, spaceships looked like, well, ships. They had massive steel girders, thick bulkheads, and rivets everywhere. And big crews, with bunks, staterooms, and mess halls. Now we know better of course: spaceships aren't big, massive constructions made of steel. They're cramped gossamer contraptions of composites and exotic alloys designed to keep the weight down.

It might have turned out differently. In his recent book Project Orion: The True Story of the Atomic Spaceship, [19] George Dyson tells the story of an engineering effort that, but for a treaty or two and a lot of bureaucratic infighting, might have given us spaceships with rivets.

Orion was a nuclear-propelled spaceship. And by "propelled," I mean propelled. The idea was to propel a spaceship by means of nuclear explosions. The explosions would come from specially constructed bombs. The bombs would be ejected and explode a few dozen meters behind a large pusher plate. The plate would absorb much of the blast, convert it to momentum, and transfer that momentum to the rest of the ship via a system of shock absorbers.

It's not terribly surprising, of course, that if you set off a nuclear explosion next to a large object, the object in question will move. The surprising discovery was that you could do that without destroying said object. But experiments demonstrated that properly treated substances could survive intact within a few meters of an atomic explosion, protected from vaporization by a thin layer of stagnating plasma. [20] The original idea had been Stanislaw Ulam's in 1948, but beginning in 1958 physicists Ted Taylor and Freeman Dyson (author George Dyson's father) worked with numerous other scientists and engineers to design a 4,000-ton spacecraft that would take advantage of this fact to extract motive force from atomic explosions. And, yes, I really did write 4,000 tons; Orion was big, clunky, and mechanical -- featuring springs, hydraulic shock absorbers, and other nineteenth-century-style accoutrements. To handle the shock, it needed to be big. It probably would have had rivets.
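
To make the mechanism concrete, here is a minimal momentum-bookkeeping sketch. The numbers are my own illustrative assumptions, not Project Orion's actual design figures; the point is simply that each pulse nudges the ship's velocity a little, and hundreds of pulses add up.

# Illustrative pulse-propulsion bookkeeping (assumed round numbers, not the
# real Orion design): each bomb delivers an impulse to the pusher plate, and
# the ship gains velocity equal to that impulse divided by its mass.

SHIP_MASS_KG = 4.0e6           # the ~4,000-ton ship described in the text
IMPULSE_PER_PULSE = 4.0e7      # assumed momentum per pulse, newton-seconds
PULSES = 800                   # assumed number of pulses in a burn

dv_per_pulse = IMPULSE_PER_PULSE / SHIP_MASS_KG   # m/s gained per explosion
total_dv = dv_per_pulse * PULSES

print(f"velocity gain per pulse: {dv_per_pulse:.1f} m/s")
print(f"total after {PULSES} pulses: {total_dv / 1000:.1f} km/s")
# The shock absorbers spread each jolt over a second or so, turning a series
# of hammer blows into an acceleration the crew and ship can tolerate.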

In fact, one of the greatest appeals of Orion was that the bigger you made it, the better it worked. While chemical rockets scale badly -- with big ones much harder to build than small ones -- Orion was just the opposite. That meant that large spacecraft, capable of long missions, were not merely possible, but actually easier to build, for a variety of reasons, than small ones. Bigger spaceships meant more mass for absorbing radiation and shock, more room to store fuel, and so on. As Freeman Dyson wrote in an early design study from 1959, "The general conclusion of the analysis is that ships able to take off from the ground and escape from the Earth's gravitational field are feasible with total masses ranging from a few hundred to a few million tons. The payloads also range from zero to a few million tons." [21]

The appeal of the project was such that its unofficial motto became "Saturn by 1970," [22] and those working on it believed that they would be able to build a ship capable of exploring the outer planets -- indeed, capable of crossing the solar system in mere months -- in time to make that trip. And why not? America was already very good at building atomic bombs and had plenty of them. The other design problems (chiefly involving resonances in the pusher plate against which the nuclear shock wave struck, and in the shock-absorption system coupling the plate to the rest of the ship) were genuine, but they were mechanical engineering problems more akin to those involved in locomotive design than rocket science. In many ways, Orion was the ultimate "Big Dumb Booster." And though it was big, the program that designed it was small.

The scientists and engineers who worked on Orion were confident. "It was dead serious," as one says. "If we wanted to do it, if there were any good reason for wanting to have high specific impulse and high thrust at the same time, we could go out and build Orion right now. And I think it would make a lot of sense." [23]

But it didn't happen, of course.

There were several problems. One was the 1963 Limited Test Ban Treaty, [24] which forbids nuclear explosions in the atmosphere and in outer space. As Dyson makes clear, the United States might well have negotiated an exception to the treaty for projects like Orion, but chose not to because it was already committed to a chemical-rocket path to the moon, and NASA was uninterested in another big project. Meanwhile, the Air Force, another potential sponsor, couldn't come up with any plausible military reasons for 4,000-ton interplanetary spacecraft. There were some halfhearted efforts by space enthusiasts within the USAF to come up with such missions, which led to President Kennedy being deeply unimpressed by a more-than-man-sized model of an Orion-powered "space battleship," but no one was fooled. This bureaucratic warfare was another problem; Orion lacked a sufficient constituency, and it threatened too many people's rice bowls.

So what about now? Could Orion ever come back? The answer is yes. The Test Ban Treaty is a real obstacle to any future deployment of Orion; however, it binds only a few nations, and many nations (like India and China) that are both nuclear-capable and interested in outer space have never signed it. For an up-and-coming country looking to seize the high ground in space in a hurry, Orion could have considerable appeal. And, of course, even the United States could withdraw from the treaty, on three months' notice, under its own terms.

Orion's scientists weren't worried about fallout. Orion would have produced some, but the amount would have been tiny compared to what was being released already from above-ground tests, and there was hope that additional work would have produced even cleaner bombs designed specifically for propulsion. Today, people are much more nervous about radiation, and under current political conditions a ground-launched Orion is a nonstarter, at least in Western countries. But not everyone cares as much about radiation, and indeed the countries that worry about it the least are those most likely to find Orion appealing as a way to attain space supremacy in a hurry. What's "Orion" in Chinese?

Some have suggested that the 1967 Outer Space Treaty, which forbids placing "nuclear weapons or any other kinds of weapons of mass destruction" in outer space, would also be a barrier to Orion, but I don't think so.

A nuclear "bomb" used for space travel, arguably, isn't a "weapon." It's a tool -- just as the Atlas rockets that launched the Mercury astronauts were, because of their use, different from the otherwise identical Atlas missiles aimed at the Soviet Union. (When asked about the difference, Kennedy responded: "attitude." [25])

The technical data from Orion are still around. (In fact, much of the design software is still in use, applied to other military and nuclear projects.) Dyson's book contains a large technical appendix, listing much declassified information. Much other information is still classified, even after nearly forty years. Will Orion come to pass in the twenty-first century? I wouldn't bet against it. Hmm. Saturn by 2020? It could happen -- but not necessarily because of anything Americans do.

At any rate, a China (or, for that matter, perhaps even an India) looking to make a splash and anxious to get around the United States' supremacy in military (and civilian) space activity might well consider Orion to be appealing. China is not a signatory to the Limited Test Ban Treaty, [26] so that legal barrier would be out of the way. China has acceded to the 1967 Outer Space Treaty, but that treaty bans only the stationing of nuclear "weapons" in outer space, and there is, as I've noted, a plausible argument that nuclear explosives designed to propel a spacecraft are not "weapons."

With international law thus neutralized, the only remedy left to other nations would be either (1) to start a war or, short of that, (2) to threaten to shoot down the spacecraft, which would probably amount to starting a war anyway. Jimmy Carter, the least bellicose of American presidents, said that an attack on a U.S. satellite or spacecraft would be treated as an act of war, and it seems unlikely that the Chinese would take a more pacific approach than Carter. [27] And even if shooting down the spacecraft were thought unlikely to lead to war, the attempt would be unlikely to succeed -- the Orion spacecraft would be huge, fast, and designed to survive in the neighborhood of a nuclear explosion: a very difficult target indeed.

The chief restraint on China would thus be world opinion, something to which the Chinese have not shown themselves particularly susceptible.

Much of the physics and engineering behind Orion is already well-known, and -- given that American designers working with puny 1960-vintage computer technology saw the problems as tractable -- it's very likely that the Chinese could manage to design and build an Orion craft within a few years of deciding to.

Hiding Orion-related work probably wouldn't be very hard either. China already has extensive space and nuclear-weapons programs, which would tend to conceal the existence of Orion-type research. And much of the necessary research and design work on Orion, involving, as it does, things like the resonance of huge steel plates and massive hydraulic shock absorbers, wouldn't look like space-related research, even to an American intelligence agency that discovered it. At least, not unless the intelligence analysts were familiar with Orion and had the possibility in mind. And how likely is that, unless they've read this book, or George Dyson's?

SPACE IN TIME

Will we wake up one day to find that a 4,000-ton Chinese spacecraft has climbed to orbit from Inner Mongolia on a pillar of nuclear fireballs and is now heading to establish a base on the moon? It wouldn't be the first time America has had such a surprise, now would it?

I hope, however, that things will take a different path. While the "big dumb booster" approach embodied by Orion has its virtues, I'd rather see lots of small, smart boosters, of the sort pioneered by Burt Rutan and the other X-Prize contestants. The twentieth-century dystopian view of space as a barren battlefield dominated by big government could still come true, but it's more likely that we'll see something like the nineteenth-century American West: space settled by companies and groups of individuals, using the best available technology, with help from the government but not as a government enterprise. With luck, the result will be the kind of society of empowered individuals envisioned by Bob Zubrin.

J. Storrs Hall believes we'll see something like that and thinks that the vastly greater capabilities and lowered costs made possible by nanotechnology will be the key. Hall writes that the technology that has already revolutionized information will soon revolutionize matter. When your word processor launches, the brief pause before the screen opens involves the equivalent of about two thousand years of pen-and-paper calculation, made almost instantaneous by the superior information-processing ability of computers. The superior matter-processing power of nanotechnology will make launching spacecraft more efficient, too: "As nanotechnology matures, the same ability to throw what would have been enormous efforts at the most trivial problems will come to the physical world as we have in the software world now. Living in space is dangerous and prohibitively expensive with current technology; it will be cheap, easy, and safe with advanced nanotechnology." [28]
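
Hall's "two thousand years" comparison is easy to sanity-check with back-of-the-envelope arithmetic. The rates in the sketch below are my own assumptions, chosen only to show that the figure is the right order of magnitude:

```python
# Back-of-the-envelope check on the "two thousand years of pen-and-paper
# calculation" comparison. All rates here are assumed for illustration.

machine_ops = 2e9                        # assume ~2 billion operations during a brief startup pause
human_ops_per_second = 0.1               # assume one pen-and-paper calculation every 10 seconds
work_seconds_per_year = 8 * 3600 * 250   # 8-hour days, 250 working days a year

person_years = machine_ops / (human_ops_per_second * work_seconds_per_year)
print(f"~{person_years:,.0f} person-years of hand calculation")  # a few thousand years
```

Change the assumed rates and the answer moves around, but it stays in the thousands-of-years range rather than days or weeks, which is the point.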

Space development and advanced technology will produce a virtuous circle. Advances in science and engineering will let us settle outer space by empowering people technologically. Space societies, by empowering people politically and socially as frontiers tend to do, will produce new technologies that will expand human potential even further.

Let us make it so.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Mon Nov 04, 2013 6:32 am

12: THE APPROACHING SINGULARITY

Individuals are getting more and more powerful. With the current rate of progress we're seeing in biotechnology, nanotechnology, artificial intelligence, and other technologies, it seems likely that individuals will one day -- and one day relatively soon -- possess powers once thought available only to nation-states, superheroes, or gods. This sounds dramatic, but we're already partway there.

Futurists use the term "Singularity" to describe the point at which technological change has become so great that it's hard for people to predict what would come next. It was coined by computer scientist and science fiction writer Vernor Vinge, who wrote that the acceleration of technological progress over the past century has itself taken place at an accelerating rate, leading him to predict greater-than-human intelligence in the next thirty years, and developments over the next century that many would have expected to take millennia or longer. He concluded: "I think it's fair to call this event a singularity .... It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it." [1]

A lot more people see it coming now -- and a lot more are writing about it -- than in 1993, when Vinge wrote these words.

WE'RE ALL SUPERMEN NOW

One question is just how much, using technologies like nanotechnology and genetic engineering, we should improve on the human condition. My own feeling is "a lot" -- it seems to me that there's plenty of room for improvement -- but others may feel differently. If we choose to improve, will we become superheroes or something like them?

Should we?

My six-year-old nephew, Christopher, wants to be a superhero. It was Superman for a while, then Spiderman. (Short-lived enthusiasm for the Incredible Hulk didn't survive the lameness of the film, apparently.)

And really, who wouldn't want to be a superhero of some sort? It's not so much the cape or the crime fighting that lies behind this sentiment. It's the way that superheroes don't have to deal with the limitations that face the rest of us. It's easy to see why kids, whose everyday limitations place them in a position that is obviously inferior to that of adults, would be so excited about super powers. But even as adults we face limitations of speed and strength and -- especially -- vulnerability to all kinds of pain, to death. The idea of being able to do better seems pretty attractive sometimes, even if we don't fantasize about being members of the Justice League any more.

Will ordinary people have better-than-human powers one day? It's starting to look possible, and some people are talking about the consequences. Joel Garreau makes the superhero angle explicit in his book Radical Evolution:

Throughout the cohort of yesterday's superheroes -- Wonder Woman, Spiderman, even The Shadow, who knows what evil lurks in the hearts of men -- one sees the outlines of technologies that today either exist, or are now in engineering .... Today, we are entering a world in which such abilities are either yesterday's news or tomorrow's headlines. What's more, the ability to create this magic is accelerating. [2]


Yes, it is. The likely consequences are substantial. Running as fast as light, a la The Flash, might be out of the question, and web slinging is unlikely to catch on regardless of technology. But other abilities, like super strength, x-ray vision, underwater breathing, and the like are not so remote. (The dating potential promised by The Elongated Man's abilities, meanwhile, may produce a market even for those second-tier superpowers.) Regardless, transcending human limitations is part of what science and medicine are about. We're already doing so, in crude fashion, with steroids, human growth hormone, and artificial knees. More sophisticated stuff, like cochlear implants, is already available, and far better is on the way.

Would I like to be smarter? Yes, and I'd be willing to do it via a chip in my brain, or a direct computer interface. (Actually, that's already prefigured a bit in ordinary life too, as things like Google and Wi-Fi give us access to a degree of knowledge that would have seemed almost spooky not long ago, but that everyone takes for granted now.) I'd certainly like to be immune to cancer, or viruses, or aging. But these ideas threaten some people who feel that our physical and intellectual limitations are what make us human.

But which limitations, exactly? Would humanity no longer be human if AIDS ceased to exist? What about Irritable Bowel Syndrome? Was Einstein less human? If not, then why would humanity be less human if everyone were that smart? It may be true, as Dirty Harry said, that "a man's got to know his limitations." But does that mean that a man is his limitations? Some people think so, but I'm not so sure. Others think that overcoming limitations is what's central to being human. I have to say that I find that approach more persuasive.

These topics (well, probably not the Irritable Bowel Syndrome) were the subject of a conference at Yale on transhumanism and ethics. The conference was covered in a rather good article in The Village Voice, which reports that many in the pro-transhumanist community expect to encounter considerable opposition from Luddites and, judging by the works of antitechnologists like Francis Fukuyama and Bill McKibben, that's probably true. [3]

I suspect, however, that although opposition to human enhancement will produce some cushy foundation grants and book contracts, it's unlikely to carry a lot of weight in the real world. Being human is hard, and people have wanted to be better for, well, as long as there have been people. For millennia, various peddlers of the supernatural offered answers to that longing -- from spells and potions in this world, to promises of reward in the next. Soon they're going to face stiff competition from science. The success of these students of human nature suggests that the demand for human improvement is high -- probably high enough to overcome any barriers. (As Isaac Asimov once wrote, "It is a chief characteristic of the religion of science, that it works." [4])

At any rate, nothing short of a global dictatorship -- whether benevolent, as featured in some of Larry Niven's future histories, or simply tyrannical, as seems more likely -- or a global catastrophe is likely to stop the rush of technological progress. In fact, as I look around, it seems that we're living in science fiction territory already.

Take, for example, this report from the Times of London: "Scientists have created a 'miracle mouse' that can regenerate amputated limbs or badly damaged organs, making it able to recover from injuries that would kill or permanently disable normal animals." From nose to tail, the mouse is totally unique in the animal kingdom for its ability to regrow its nose and tail -- and heart, joints, toes, and more. But the revolution isn't complete with Mickey's new limbs. The more fascinating prospect is that this trait can be replicated in other mice by transplanting cells from the "miracle mouse." "The discoveries raise the prospect that humans could one day be given the ability to regenerate lost or damaged organs, opening up a new era in medicine." [5]

Limb regeneration and custom-grown organs! Bring it on! Then there are the ads I'm seeing for offshore labs offering stem cell therapy to Americans. I don't know whether this particular therapy lives up to its claims, but if it doesn't, the odds are that other places soon will be offering therapy that does (see the mouse story above).

Meanwhile, Cambridge University just held the second conference on Strategies for Engineered Negligible Senescence. At the conference, people discussed ways of slowing, halting, or even reversing the aging process. [6] There was also a conference on medical nanotechnology, [7] while elsewhere nanotechnologists reported that they had produced aggregated carbon nanorods [8] that are harder than diamond.

On a more personal note, my wife recently went to the doctor, where they downloaded the data from the implanted computer that watches her heart, ready to step in to pace her out of dangerous rhythms or shock her back into normal rhythms if things went too badly. I remember seeing something similar in a science fiction film when I was a kid, but now it's a reality. And, of course, I now get most of my news, and carry on most of my correspondence, via media that weren't in existence fifteen years ago.

THE FUTURE ISN'T THE FUTURE

I mention this because as we look at the pace of change, we tend to take change that has already happened for granted. But these stories now (except for my wife's device, which isn't even newsworthy today) are just random minor news items that I noticed over a period of a week or two, even though they would have been science-fictional not long ago. Much as we get "velocitized" in a speeding car, so we've become accustomed to a rapid pace of technological change. This change isn't just fast, but continually accelerating. The science-fictional future isn't science-fictional. Sometimes, it's not even the future any more.

Nonetheless, we'll probably see much more dramatic change in the next few decades than we've seen in the last. So argues Ray Kurzweil in his new book, The Singularity Is Near: When Humans Transcend Biology.

Kurzweil notes the exponential progress in technological improvement across a wide number of fields and predicts that we'll see artificial intelligences of fully human capability by 2029, along with equally dramatic improvements in biotechnology and nanotechnology. (In fact, these developments tend to be self-reinforcing -- better nanotechnology means better computers and better understanding of biology; better computers mean that we can do more with the data we've got, and progress more rapidly toward artificial intelligence, and so on.)

The upshot of this is that capabilities now available only to nation-states will soon be available to individuals. That's not surprising, of course. I've probably got more computing power in my home (where we usually have nine or ten computers at any one time) than most nation-states could muster a few decades ago, and it does, in fact, allow me to do all sorts of things that individuals couldn't possibly have done on their own until such power became available. But the changes go beyond computers, which merely represent the first wave of exponential technological progress. People will have not only intellectual but physical powers previously unavailable to individuals. Changes will come faster and thicker than we have seen from the computer revolution so far.

Kurzweil discusses the Singularity, and what it's likely to mean, in the following excerpts from an interview originally done for my blog, InstaPundit. [9] I encourage you to read his book, though, because the Singularity is, in a sense, the logical endpoint of the many near-term trends and events described in this book. The world is changing in a big way, and my reports might be likened to those from a frontline correspondent, while Kurzweil's writings are more in the nature of a strategic overview.

Reynolds: Your book is called The Singularity Is Near and -- as an amusing photo makes clear -- you're spoofing those "The End is Near" characters from the New Yorker cartoons.

For the benefit of those who aren't familiar with the topic, or who may have heard other definitions, what is your definition of "The Singularity"? And is it the end? Or a beginning?

Kurzweil: In chapter 1 of the book, I define the Singularity this way: "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's own particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a 'singularitarian.'"

The Singularity is a transition, but to appreciate its importance, one needs to understand the nature of exponential growth. On the one hand, exponential growth is smooth with no discontinuities, and values remain finite. On the other hand, it is explosive once we reach the "knee of the curve." The difference between what I refer to as the "intuitive linear" view and the historically correct exponential view is crucial, and I discuss my "law of accelerating returns" in detail in the first two chapters. It is remarkable to me how many otherwise thoughtful observers fail to understand that progress is exponential, not linear. This failure underlies the common "criticism from incredulity" that I discuss at the beginning of the "Response to Critics" chapter.

To describe these changes further, within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses, "experience beaming," and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned. But all of this is just the precursor to the Singularity. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We'll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.
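
As an aside on Kurzweil's point about exponential versus linear thinking (the illustration is mine, not his), the two projections are easy to compare numerically under an assumed growth rate:

```python
# Compare the "intuitive linear" extrapolation with a compounding (exponential)
# one. The assumed growth rate is illustrative, not a figure from Kurzweil.

start = 1.0   # current capability, arbitrary units
rate = 1.0    # assume capability grows 100% per year (i.e., doubles annually)

for years in (5, 10, 20, 40):
    linear = start * (1 + rate * years)        # linear projection
    exponential = start * (1 + rate) ** years  # compounding projection
    print(f"{years:>2} years out: linear x{linear:>6,.0f}   exponential x{exponential:>16,.0f}")
```

The two columns track each other early on and then diverge explosively; that divergence is what "knee of the curve" refers to.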

Reynolds: Over what time frame do you see these things happening? And what signposts might we look for that would indicate we're approaching the Singularity?

Kurzweil: I've consistently set 2029 as the date that we will create Turing test-capable machines. We can break this projection down into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Two Japanese efforts targeting 10 quadrillion cps around the end of the decade are already on the drawing board. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms ....

In terms of signposts, credible reports of computers passing the full Turing test will be a very important one, and that signpost will be preceded by non-credible reports of successful Turing tests.

A key insight here is that the nonbiological portion of our intelligence will expand exponentially, whereas our biological thinking is effectively fixed. When we get to the mid-2040s, according to my models, the nonbiological portion of our civilization's thinking ability will be billions of times greater than the biological portion. Now that represents a profound change.

The term "Singularity" in my book and by the Singularity-aware community is comparable to the use of this term by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied billions and ultimately trillions of trillions fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That's what I've tried to do in this book.

Reynolds: You look at three main areas of technology, what's usually called GNR for Genetics, Nanotechnology, and Robotics. But it's my impression that you regard artificial intelligence -- strong AI -- as the most important aspect. I've often wondered about that. I'm reminded of James Branch Cabell's Jurgen, who worked his way up the theological food chain past God to Koschei The Deathless, the real ruler of the Universe, only to discover that Koschei wasn't very bright, really. Jurgen, who prided himself on being a "monstrous clever fellow," learned that "Cleverness was not on top, and never had been." [10] Cleverness isn't power in the world we live in now -- it helps to be clever, but many clever people aren't powerful, and you don't have to look far to see that many powerful people aren't clever. Why should artificial intelligence change that? In the calculus of tools-to-power, is it clear that a ten-times-smarter-than-human AI is worth more than a ten megaton warhead?

Kurzweil: This is a clever -- and important -- question, which has different aspects to it. One aspect is what is the relationship between intelligence and power? Does power result from intelligence? It would seem that there are many counterexamples.

But to piece this apart, we first need to distinguish between cleverness and true intelligence. Some people are clever or skillful in certain ways but have judgment lapses that undermine their own effectiveness. So their overall intelligence is muted.

We also need to clarify the concept of power as there are different ways to be powerful. The poet laureate may not have much impact on interest rates (although conceivably a suitably pointed poem might affect public opinion), but s/he does have influence in the world of poetry. The kids who hung out on Bronx street corners some decades back also had limited impact on geopolitical issues, but they did play an influential role in the creation of the hip hop cultural movement with their invention of break dancing. Can you name the German patent clerk who wrote down his daydreams (mental experiments) on the nature of time and space? How powerful did he turn out to be in the world of ideas, as well as on the world of geopolitics? On the other hand, can you name the wealthiest person at that time? Or the U.S. secretary of state in 1905? Or even the president of the U.S.? ...

Reynolds: It seems to me that one of the characteristics of the Singularity is the development of what might be seen as weakly godlike powers on the part of individuals. Will society be able to handle that sort of thing? The Greek gods had superhuman powers (pretty piddling ones, in many ways, compared to what we're talking about) but an at-least-human degree of egocentrism, greed, jealousy, etc. Will post-Singularity humanity do better?

Kurzweil: Arguably, we already have powers comparable to the Greek gods, albeit, as you point out, piddling ones compared to what is to come. For example, you are able to write ideas in your blog and instantly communicate them to just those people who are interested. We have many ways of communicating our thoughts to precisely those persons around the world with whom we wish to share ideas. If you want to acquire an antique plate with a certain inscription, you have a good chance of quickly finding the person who has it. We have increasingly rapid access to our exponentially growing human knowledge base.

Human egocentrism, greed, jealousy, and other emotions that emerged from our evolution in much smaller clans have nonetheless not prevented the smooth, exponential growth of knowledge and technology through the centuries. So I don't see these emotional limitations halting the ongoing progression of technology.

Adaptation to new technologies does not occur by old technologies suddenly disappearing. The old paradigms persist while new ones take root quickly. A great deal of economic commerce, for example, now transcends national boundaries, but the boundaries are still there, even if now less significant.

But there is reason for believing we will be in a position to do better than in times past. One important upcoming development will be the reverse-engineering of the human brain. In addition to giving us the principles of operation of human intelligence that will expand our AI tool kit, it will also give us unprecedented insight into ourselves. As we merge with our technology, and as the nonbiological portion of our intelligence begins to predominate in the 2030s, we will have the opportunity to apply our intelligence to improving on -- redesigning -- these primitive aspects of it ....

Reynolds: If an ordinary person were trying to prepare for the Singularity now, what should he or she do? Is there any way to prepare? And, for that matter, how should societies prepare, and can they?

Kurzweil: In essence, the Singularity will be an explosion of human knowledge made possible by the amplification of our intelligence through its merger with its exponentially growing variant. Creating knowledge requires passion, so one piece of advice would be to follow your passion.

That having been said, we need to keep in mind that the cutting edge of the GNR revolutions is science and technology. So individuals need to be science and computer literate. And societies need to emphasize science and engineering education and training. Along these lines, there is reason for concern in the U.S. I've attached seven charts I've put together (that you're welcome to use) that show some disturbing trends. Bachelor's degrees in engineering in the U.S. were 70,000 per year in 1985, but have dwindled to around 53,000 in 2000. In China, the numbers were comparable in 1985 but have soared to 220,000 in 2000, and have continued to rise since then. We see the same trend comparison in all other technological fields, including computer science and the natural sciences. We see the same trends in other Asian countries such as Japan, Korea, and India (India is not shown in these graphs). We see the same trends at the doctoral level as well.

One counterpoint one could make is that the U.S. leads in the application of technology. Our musicians and artists, for example, are very sophisticated in the use of computers. If you go to the NAMM (National Association of Music Merchants) convention, it looks and reads like a computer conference. I spoke recently to the American Library Association, and the presentations were all about databases and search tools. Essentially every conference I speak at, although diverse in topic, looks and reads like a computer conference.

But there is an urgent need in our country to attract more young people to science and engineering. We need to make these topics cool and compelling.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Tue Nov 05, 2013 12:31 am

Conclusion

THE FUTURE


We've seen all sorts of ways in which people are being empowered, from blogs and multimedia, to home-based manufacturing and other cottage industries, to the longer-term promise of molecular manufacturing and related technologies. So what's the big picture in a world where the small matters more?

Making predictions is always difficult. And considering the changes that strong technologies like nanotech, artificial intelligence, and genetic engineering are likely to make, predicting beyond the next few decades is especially difficult. But here are some thoughts on what it's all likely to mean, and what we should probably do to help ensure that the changes are mostly beneficial.

eBAY NATION

We're not all going to wind up working for eBay or Amazon, but as large organizations lose the economies of scope and scale that once made them preferred employers, more people are going to wind up working for themselves or for small businesses. That's probably a good thing. There doesn't seem to be a huge wellspring of love for the Dilbert lifestyle -- though, as I've mentioned before, most people wouldn't mind Dilbert's big-company benefits package. So eBay, with its health coverage for sellers, may be a prototype for future solutions to this dilemma.

If people are going to be doing more outside the big-organization box, and if most of our current infrastructure of health and retirement benefits and the like is built around the implicit or explicit expectation that most people will work for big businesses, it's probably time for a change.

On the smaller scale, this would suggest that it's time to make things like health insurance and retirement benefits more portable, and to make the tax code more friendly to small businesses and the self-employed. There's always a lot of lip service in that direction, but not so much actual movement. Some people might even go so far as to claim that this is an argument for single-payer national health insurance, which in theory would facilitate entrepreneurship. Given its poor record elsewhere -- and the fact that places like Canada, Britain, and Germany aren't exactly hotbeds of independent entrepreneurial activity -- I don't think I'd endorse that approach. But a mechanism that would let people operate on their own, without the very real problems that a lack of big-employer health insurance creates, would do a lot to facilitate independence. I know quite a few people who stay in their jobs because they need the health benefits; they'd be gone in a shot if they could get these benefits another way.

On a larger scale, though, it's worth looking at the role of government in general. I mentioned earlier that the big organizations in the twenty-first century will be more likely to flourish if they're organized so as to help individuals do what they want -- to take the place of older, bigger organizations in a more disintermediated way. That's what eBay, Amazon, and others do. Could a similar approach work for the government? We're a long way from that right now.

In theory, of course, our government is all about maximizing individual potential and choices. In practice, well, not so much, as Joel Miller notes in his book Size Matters: How Big Government Puts the Squeeze on America's Families, Finances, and Freedom. [1] Miller mostly describes the problem rather than solutions. Thoughts on how to reorganize government to further those goals could easily occupy another book, but it strikes me that now is a good time to start trying to figure these things out. [2]

THE SWARM

In the chapter "Horizontal Knowledge," I discuss the rapid appearance of the World Wide Web, without any centralized planning effort, as evidence of how important horizontal knowledge and spontaneous organization have become. I've made this point before, as long ago as 2003, [3] and Kevin Kelly echoes it in a history of the Web published in Wired:

In fewer than 4,000 days, we have encoded half a trillion versions of our collective story and put them in front of 1 billion people, or one-sixth of the world's population. That remarkable achievement was not in anyone's 10-year plan .... Ten years ago, anyone silly enough to trumpet the above ... as a vision of the near future would have been confronted by the evidence: There wasn't enough money in all the investment firms in the entire world to fund such a cornucopia. The success of the Web at this scale was impossible. [4]


But it happened. As Kelly notes, everyone who pondered the Web, including many very smart people who had been thinking about communications and computers for decades and who had substantial sums of money at stake, nonetheless missed the true story: the power of millions of amateurs doing things because they wanted to do them, not because they were told to. It was an Army of Davids, doing what the Goliaths never could have managed.

Because information is easier to manipulate than matter, the Army of Davids has appeared first in areas where computers and communications are involved. But new technologies will extend the ability of people to cooperate beyond cyberspace, as well as increasing what people can do in the real world. What's more, this process will feed back upon itself. New technologies will help people cooperate, which will lead to further improvements in technology, which will lead to more efficient cooperation (and individual effort), which will lead to further improvements, and so on. This means that "swarms" of activity will start to happen on all sorts of fronts. I can imagine good swarms (say, lots of people working on developing vaccines or space technology) and bad ones (lots of people working on viruses or missiles). I expect we'll see more of the good than the bad -- just as we've seen far more coordinated good activity on the Web than bad -- but the changes are likely to surprise the experts just as they have in the past.

HORIZONTAL POLITICS

Political power used to be a pyramid. In the old days, there was a king at the top, with layers of scribes, priests, and aristocrats below. In modern times things were more diffuse, sort of. Ordinary people who wanted to have an impact needed to find an interlocutor -- typically an industrial-age institution like a labor union, a newspaper, or a political machine. And getting their ear was hard.

Not any more, as this email from a blog-reader illustrates:

I wrote you a few weeks ago about the Illinois High School Association (IHSA) adding rules to stop Catholic schools from winning too many championships. My 15-year-old son came up with his own solution. He put together his own website (www.GoHomeIHSA.com), added a blog section, did a press release, got a bunch of publicity in the newspaper, and now he has been asked to make a brief presentation of his ideas at the IHSA Board meeting tomorrow. He spent under $10 for the domain name and set up the blog for free. Three years ago this never could have happened. Is it any wonder that many of our traditional institutions hate the Internet?


No wonder at all, as you've figured out already if you've read this far. The Internet makes the middleman much less important.

This poses a real challenge to traditional political institutions. Political parties are obviously in trouble. As a commenter on a blog I read a while back noted, mass democracy is a thing of the past -- the only problem is that it's nearly the only kind of democracy we've ever been able to make work.

Athens, of course, had a more fluid democracy, but the framers of our Constitution didn't regard its experience as a success; they were trying to prevent its problems, not emulate its excesses. There are lots of reasons to believe that unmediated democracy is a poor decision-making method, which is one reason why, in our constitutional system, democracy has always been mediated. Voters choose decision makers, rather than making decisions themselves. [5]

But if a fear of unmediated democracy led Americans to choose a system that was mediated, we must now deal with pressures toward disintermediation. The additional transparency added by the Internet is a good thing, limiting insider back scratching and deals done at the expense of constituents. On the other hand, the pressure toward direct democracy, or something very close to it, is likely to build. Is that a good idea? Probably not, unless you think that America would do better if it were run like your condo association.

The challenge in coming decades will be to take advantage of the ability for self-organization and horizontal knowledge that the Internet and other communications technologies provide without letting our entire political system turn into something that looks like an email flamewar on Usenet. I think we'll be able to do that -- most people's tolerance for flaming is comparatively low, and in a democracy, what most people tolerate matters -- but things are likely to get ugly if I'm wrong.

EXPRESS YOURSELF

But it's not just politics. People are hardwired to express themselves. Imagine two tribes of cavemen approaching a cave. Which tribe is more likely to survive -- the one where someone says, "You know, I think there's a bear in there," or the one where nobody talks? I'm pretty sure we're descended from the talkative ones.

Until pretty recently, self-expression on any sizable scale was the limited province of the rich and powerful, or their clients. Only a few people could publish books, or write screenplays that might be filmed, or see their artwork or photographs widely circulated, or hear their music performed before a crowd. Now, pretty much anyone can do that. You can post an essay (or even an entire book) on the Web, make a film, or circulate your art and photos from anywhere and have them available to the entire world.

Now that more people can do that, more people are doing it, and it seems to make them happy. Naturally, some critics complain that much of what results isn't very good. That's true, but if you look at books, films, or art from the pre-Internet era, you'll find that much of that stuff wasn't very good either. (Heaven's Gate and Gigli were not Internet productions.) As science fiction writer Ted Sturgeon once said in response to a critic's claim that 90 percent of science fiction was crap: "Ninety percent of everything is crap." [6]

And if you doubt this, spend a few minutes channel-surfing or perusing bookstore stacks at random. You may conclude that Sturgeon was being generous, not just to science fiction, but to, well, everything.

On the other hand, "crap" is always a matter of opinion. Many people write books that are very valuable to them as self-expression, regardless of whether they get good reviews or sell millions of copies. (I myself have written two novels and enjoyed the writing process very much, even though neither has ever been published.) And regardless of whether they sell millions or please critics, such books probably please some people and can now sell in smaller quantities thanks to niche publishing markets and improved printing technologies.

Novelist Bill Quick, who has published many books through traditional publishers, tried the Internet publication route with a novel of his own and was pretty happy with the results. A few weeks after placing his novel Inner Circles on the Internet, he reported that despite not having paid for advertising or an agent, selling only via an automated website linked from his weblog, he had made over $4,500. Chicken feed? No. Quick said that his book, if it had been salable at all, would have brought an advance of about $10,000, payable in two installments, which after deducting the agent's commission would have produced a first check of about $4,250. He concluded, "I have taken in more than that as of now, because I am getting all of the 'cover price,' not eight percent of it (the usual author cut on a paperback)." [7]
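
To make the arithmetic behind Quick's comparison explicit: the sketch below assumes a standard 15 percent agent's commission; the advance, the two installments, and the 8 percent paperback royalty come from his own account, while the $6 cover price is my placeholder.

```python
# Rough comparison of a traditional midlist advance with direct Internet sales.
# Commission rate and cover price are assumed; the advance and royalty figures
# come from Quick's own account.

advance = 10_000          # typical midlist advance, per Quick
installments = 2
agent_commission = 0.15   # assumed standard agent's cut

first_check = (advance / installments) * (1 - agent_commission)
print(f"First advance installment after commission: ${first_check:,.0f}")  # about $4,250

cover_price = 6.00        # assumed paperback-style price point
royalty = 0.08            # "the usual author cut on a paperback"
print(f"Copies to match that check at full cover price: ~{first_check / cover_price:,.0f}")
print(f"Copies to match it at an 8% royalty instead:    ~{first_check / (cover_price * royalty):,.0f}")
```

On those assumptions, a few hundred direct sales at full cover price match the first advance check that would otherwise have required several thousand royalty-bearing copies.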

Quick isn't just anyone, of course. He's a widely read blogger who's published many novels in the past. He warns his readers of this, but observes, "This outcome is a godsend for those of us professionals who think of ourselves as midlist, and who used to grind out two or three books a year in order to make thirty or forty grand before taxes." This is an important point. Once you realize how little money books on paper usually pay, Internet publication looks a lot better. More significantly, money or not, I think we'll see more authors able to earn an income, or at least a second income, as the Web grows.

Before the Industrial Revolution, you couldn't really make a living as a writer unless you had someone rich funding you. Books just didn't make enough money. In the nineteenth and twentieth centuries it was possible to do well as a writer, but books and then films were necessarily mass-marketed. You had to be able to sell a lot of them to recoup the substantial cost of producing them. The products had to be somewhat appealing to a large audience because of that -- and because it was hard to find a smaller audience and hard for a smaller audience to find its author.

That's different now. It's become something of a truism to note that the Web is like a rainforest, full of niches that the well-adapted can flourish in, but like a lot of things, the expression is a truism because it's, well, true. And it's getting truer all the time as the number of people on the Web grows, thus expanding the number of potential customers; and as the tools that let people find what they really want, and not some mass market first-approximation thereof, get steadily better. Some people, of course, will always want to read the book, or see the film, or listen to the songs that lots of other people are, so there will always be a kind of mass market. But even that will be a niche of sorts, in place to address people's preferences rather than because of technological necessity.

Usually, too, when people talk about what "everyone" is reading or watching, they really mean not everyone, but everyone they know. As mass markets fragment, that may mean that people will really define things by their niches, rather than by true mass media. In fact, we're already seeing a lot of that. Another Internet truism is the replacement of Andy Warhol's line that in the future, everyone will be famous for fifteen minutes, with the statement that in the future, everyone will be famous to fifteen people. (As a so-called "celebrity blogger," I was once recognized by a gushing waitress in a restaurant while the rest of the staff stood by, uncomprehending. I wasn't in their niche, and they weren't in mine.)

At any rate, I think we're certain to see a future in which many more people think of themselves as writers, filmmakers, musicians, or journalists than in the past. This may feed back into the political equation noted above, but it could go either way. On the one hand, creative people tend to lean leftward, which suggests that if more people see themselves as creators, the country might move left. On the other hand, people have been complaining that the left has disproportionate influence in creative industries, meaning that if more people can get involved, those fields might shift back the other way, and the overrepresentation of leftist viewpoints might be countered. I suspect we'll see the latter rather than the former.

THE SINGULARITY IS NEAR

As I mentioned in the previous chapter, futurists write about something they call "The Singularity," meaning a point in the future where technological change has advanced to the point that present-day predictions are likely to be wide of the mark. By definition, it's hard to talk about what things will be like then, but the trend of empowered individuals is likely to continue. As the various items we've surveyed demonstrate, technology seems to be shifting power downward, from large organizations to individuals and small groups. Newer technologies like nanotechnology, artificial intelligence, and biotechnology will move us much further along the road, but advances in electronics and communications have gotten us started. You can write -- heck, I have written -- about the wonders to come in the future, but, in fact, we've moved a considerable distance in that direction already.

While a world of vastly empowered souls may lurk in the future, we're already living in a world in which individuals have far more power than they used to in all sorts of fields. Yesterday's science fiction is today's reality in many ways that we don't even notice.

That's not always good. With technology bestowing powers on individuals that were once reserved to nation-states, the already-shrinking planet starts to look very small indeed. That's one argument for settling outer space, of course, and many will also see it as an argument for reducing the freedom of individuals on Earth. If those latter arguments carry the day, it could lead to global repression. In its most benign form, we might see something like the A.R.M. of Larry Niven's science fiction future history, a global semisecret police force run by the United Nations that quietly suppresses dangerous scientific knowledge. In less benign forms, we might see harsh global tyranny, justified by the danger of man-made viruses and similar threats. (As I write this, scientists in a lab in Atlanta have resurrected the long-dead 1918 Spanish Flu and published its genome, meaning that people with resources far below those of nation-states will now be able to recreate one of the deadliest disease agents in history. [8])

I doubt that even a science-fictional tyranny could stamp out pervasive and inexpensive technology. Worse, it would drive most of the work into underground labs or rogue states and give people an incentive to put it to destructive use. That doesn't mean that some people won't be tempted to give tyranny a chance, especially if they can put themselves in the tyrant's seat.

On the other hand, there are lots of hopeful signs in the present -- trends that will probably continue. Today's revolutionary communications technologies led to a massive mobilization of private efforts in response to disasters like the Indian Ocean tsunami and Hurricane Katrina, and it was text-messaging, websites, and email that broke the Chinese government's SARS cover-up. The phenomenon of "horizontal knowledge" is likely to result in people organizing, both spontaneously and with forethought, to deal with future crises; and there's considerable reason to think that those responses will be more effective than top-down governmental efforts. Indeed, we may see distributed efforts -- modeled on things like SETI@home or NASA's SpaceGuard asteroid-warning project -- that will incorporate empowered individuals to look for and perhaps even respond to new technological threats.

MAKING CONNECTIONS

When I want to know something about big events in India, I tend to look first to blogs like India Uncut, by Indian journalist Amit Varma. When I want to know about military affairs, I look at blogs like The Belmont Club, The Fourth Rail, The Mudville Gazette, or military analyst Austin Bay's site. When I want to know what's going on in Iraq, I look at Iraqi blogs and blogs by American soldiers there. When one Iraqi blogger reported war crimes by American troops, I called attention to his post, got an American military blogger in Iraq to point it out to authorities, and the soldiers involved wound up being court-martialed and convicted.

Yeah, so, I read a lot of blogs. I'm a blogger, after all. But so are a lot of people, and the person-to-person contact that blogs and other Internet media promote tends to encourage person-to-person relationships across professional, political, and geographic boundaries. This is just another form of the horizontal knowledge that I wrote about before, but it may play an important role in breaking down barriers and defusing animosities across those same boundaries.

People have been saying for a century, of course, that increased international understanding would prevent war, and yet we've seen rather a lot of war over the past century. Still, it may simply be that we haven't reached the tipping point yet. Certainly there's a qualitative, as well as a quantitative, difference as more and more people make person-to-person contact on their own. It's a very different thing from watching other countries' television programs and movies, or having a few people go on tourist expeditions and attend feel-good conferences of the Pugwash variety. While this isn't likely to eliminate hostility, it will certainly transform current understanding and cultural definitions. Overall, I think that the effect is more likely to be positive than negative.

THE WORLD AS WE KNOW IT (I FEEL FINE)

And that's probably the bottom line regarding all the changes described in this book. Technology is empowering individuals and small groups in all sorts of ways, producing fairly dramatic changes as compared to the previous couple of centuries. Not all of those changes are positive -- there's bitter along with the sweet. But the era of Big Entities wasn't so great. From the Napoleonic Wars to the Soviet Gulags, the empowerment of huge organizations and bureaucracies wasn't exactly a blessing to the human spirit. A return to some sort of balance, in which the world looks a bit more like the eighteenth century than the twentieth, is likely to be a good thing.

In some sense, of course, how you view these changes depends a lot on how you view humanity. If you think that people are, more often than not, good rather than bad, then empowering individuals probably seems like a good thing. If, on the other hand, you view the mass of humanity as dark, ignorant, and in need of close supervision by its betters, then the kinds of things I describe probably come across as pretty disturbing.

I fall into the optimistic camp, though I acknowledge that there's evidence pointing both ways. Those who think I'm taking too rosy a view, however, had better hope that I turn out to be right after all. That's because the changes I describe aren't so much inevitable as they are already here, and are just in the process of becoming, as William Gibson would have it, more evenly distributed.

The Army of Davids is coming. Let the Goliaths beware.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Tue Nov 05, 2013 1:23 am

NOTES

INTRODUCTION -- Do It Yourself


1. Damien Cave, "Rage for the Machine," Salon.com, 12 April 2000, http://www.salon.com/tech/log/20 ... /joy_song/. See also Dave Hallsworth, "Mobius Dick vs. the Luddites," Spiked-Online.com, 4 July 2001, http://www.spiked-online.com/Articles/00000002D16F.htm.

CHAPTER 1 -- The Change

1. Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, 4-5 (Modern Library, 1937). Smith got some details wrong in his description but nothing that affects his point.

2. David L. Collinson, "Managing Humor," Journal of Management Studies (May 2002), quoted in Daniel H. Pink, A Whole New Mind (Riverhead Books, 2005), 179.

3. Robert William Fogel, The Escape from Hunger and Premature Death: 1700-2100 (Cambridge University Press, 2004).

4. Fogel, 2.

5. John Kenneth Galbraith, The New Industrial State (Houghton Mifflin, 1966).

6. Neil Gershenfeld, Fab: The Coming Revolution on Your Desktop -- From Personal Computers to Personal Fabrication (Basic Books, 2005).

7. See Glenn Reynolds, "Backyard Auteurs," Popular Mechanics, October 2005, 56.

CHAPTER 2 -- Small Is the New Big

1. Jeff Jarvis, Buzzmachine blog. Available online at http://www.buzzmachine.com/index.php/2005/07/25/smallisthenewbighrdepartment/.

2. Louis Uchitelle, "Defying Forecast, Job Losses Mount for a 22nd Month," New York Times, 6 September 2003. Available online at http://www.nytimes.com/2003/09/06/business/06JOBS.html?ex=1378180800&en=81557ae4e61Of624&ei=5007&partner=USERLAND.

3. Mickey Kaus, "Weaving the Gloom," Slate. Available online at http://slate.msn.com/id/2087872/.

4. John Scalzi, Scalzi.com. Available online at http://www.scalzi.com/whatever/archives/000483.html.

5. Daniel Pink, Free Agent Nation: The Future of Working for Yourself (Warner Business, 2002).

6. Virginia Postrel, The Substance of Style: How the Rise of Aesthetic Value Is Remaking Commerce, Culture, and Consciousness (HarperCollins, 2003), 164-67.

7. Ralph Kinney Bennett, "Car Country," TechCentralStation, 5 September 2003. Available online at http://www.techcentralstation.com/090503A.html.

8. The FAQ on eBay's program is available online here: http://pages.ebay.com/services/buyandsell/powerseller/healthcareprog.html. eBay doesn't pay for the insurance, but does use its buying power to make a group plan available. Once qualified, power sellers get to keep the coverage even if their sales fall below the required minimum.

9. According to Wal-Mart's website: "We insure more than 568,000 associates and more than 948,000 people in total, who pay as little as $17.50 for individual coverage and $70.50 for family coverage bi-weekly. Unlike many plans, after the first year, Wal-Mart's Associates' Medical Plan has no lifetime maximum for most expenses, protecting our associates against catastrophic loss and financial ruin." They also match 401(k) contributions and subsidize child care. Available online at http://www.walmartfacts.com/associates/default.aspx#a42.

10. Virginia Postrel, "In New Age Economics, It's More about the Experience Than about Just Owning Stuff," New York Times, 9 September 2004, C2.

11. Virginia Postrel, "A Prettier Jobs Picture?" New York Times Magazine, 22 February 2004, 16.

CHAPTER 3 -- The Comfy Chair Revolution

1. Ray Oldenburg, The Great Good Place: Cafes, Coffee Shops, Bookstores, Bars, Hair Salons, and Other Hangouts at the Heart of a Community (Marlowe & Co., 1999).

2. Carol Anne Douglas, "Support Feminist Bookstores!" Off Our Backs, 31 December 2000, 1.

3. Nick Hornby, High Fidelity (Riverhead, 1996).

4. Linda Baker, "Urban Renewal: The Wireless Way," Salon, 29 November 2004. Available online at http://www.salon.com/tech/feature/2004/11/29/digital_metropolis/index_np.html.

5. "The Internet in a Cup," The Economist, 18 December 2003. Available online at http://www.economist.com/World/europe/displayStory.cfm?story_id=2281736.

6. Virginia Postrel, The Substance of Style: How the Rise of Aesthetic Value Is Remaking Commerce, Culture, and Consciousness (HarperCollins, 2003).

7. Beth Mattson, "Where Town Square Meets the Mall," Minneapolis-St. Paul Business Journal, 27 August 1999. Available online at http://www.bizjournals.com/twincities/stories/1999/08/30/focus3.html?page=1.

8. Scott Morris, "A Third Place for Camano," Daily Herald (Everett, WA), 5 September 2003. Available online at http://www.heraldnet.com/Stories/03/9/5/17437484.cfm.

9. For much more on the subject of malls, private property, and free speech, see Jennifer Niles Coffin, "The United Mall of America: Free Speech, State Constitutions, and the Growing Fortress of Private Property," Volume 33, University of Michigan Journal of Law Reform 615 (2000).

10. Branzburg v. Hayes, 408 U.S. 665, 794 (1972).

11. Reno v. American Civil Liberties Union, 521 U.S. 844, 870 (1997).

12. Charles L. Black Jr., "He Cannot Choose but Hear: The Plight of the Captive Auditor," Volume 53, Columbia Law Review (1953), 960.

CHAPTER 4 -- Making Beautiful Music, Together

1. "Mandela Steals the Show from Live 8 Rockers," Cape Argus (Cape Town), 4 July 2005. Available at http://www.iol.co.za/index.php?secid= I &click_ id= 126&art_id=vn20050704112543593CI57427.

2. Telephone interview with Ali Partovi, 6 July 2005.

3. Walter Mossberg, "Podcasting Is Still Not Quite Ready for the Masses," Wall Street Journal, 6 July 2005, D5.

4. Lawrence Lessig, "The Same Old Song," Wired, July 2005, 100.

5. Jesse Walker, "Free Your Radio: Three Liberties We've Lost to the FCC," Reason, December 2001. Available online at http://www.reason.com/0112/cr.jw.radio.shtml.

6. Available online at http://www.diymedia.net/archive/0703.htm#071103.

7. James Plummer, "Real Media Reform," TechCentralStation, 20 June 2003. Available online at http://www.techcentralstation.com/062003F.html.

8. For more on this, see J.D. Lasica, Darknet: Hollywood's War Against the Digital Generation (Wiley, 2005).

CHAPTER 5 -- A Pack, Not a Herd

1. Galt's original website is at http://www.geocities.com/johnathanrgalt/; the newer version of his movement, Internet Haganah, is at http://haganah.us/haganah/index.html.

2. John Hawkins, "An Interview with Jon David." Available online at http://www.rightwingnews.com/interviews/jondavid.php.

3. Hawkins.

4. Brad Todd, "109 Minutes," originally published on FrankCagle.com. Available online at http://web.archive.org/web/20041010182414/http://216.111.31.12/details.asp?PRID=32.

5. Richard Aichele, "A Shining Light in Our Darkest Hour," Professional Mariner, December/January 2002. Available online at http://www.fireboat.org/press/proCmariner_jan02_1.asp.

6. Mark Steyn reports one example of missing some pretty obvious warning signs:

With hindsight, the defining encounter of the age was not between Mohammed Atta's jet and the World Trade Center on September 11, 2001, but that between Mohammed Atta and Johnelle Bryant a year earlier. Bryant is an official with the US Department of Agriculture in Florida, and the late Atta had gone to see her about getting a $US650,000 government loan to convert a plane into the world's largest cropduster. A novel idea.

The meeting got off to a rocky start when Atta refused to deal with Bryant because she was but a woman. But, after this unpleasantness had been smoothed out, things went swimmingly. When it was explained to him that, alas, he wouldn't get the 650 grand in cash that day, Atta threatened to cut Bryant's throat. He then pointed to a picture behind her desk showing an aerial view of downtown Washington -- the White House, the Pentagon, et al. -- and asked: "How would America like it if another country destroyed that city and some of the monuments in it?"

Fortunately, Bryant's been on the training course and knows an opportunity for multicultural outreach when she sees one. "I felt that he was trying to make the cultural leap from the country that he came from," she recalled. "I was attempting, in every manner I could, to help him make his relocation into our country as easy for him as I could."


Mark Steyn, "Mugged by Reality?" The Australian, 25 July 2005. Available online at http://www.theaustralian.news.com.au/common/ story_page/0.5744,16034303%5E7583,00.html. Even government employees are likely to be more sensitive to the warning signs nowadays.

7. Jim Henley, "Unqualified Offerings," http://www.highclearing.com/uoarchives/week_2002_10_20.html#003796. Henley is quoting an anonymous bystander.

8. Colby Cosh, ColbyCosh.com, available online at http://www.colbycosh.com/old/october02.html#sscd.

9. Kathleen Tierney, "Strength of a City: A Disaster Research Perspective on the World Trade Center Attack," Social Science Research Council. Available online at http://www.ssrc.org/sept11/essays/tierney.htm. See also Monica Schoch-Spana, "Educating, Informing, and Mobilizing the Public," in Barry S. Levy and Victor Sidel, Terrorism and Public Health: A Balanced Approach to Strengthening Systems and Protecting People (Oxford University Press, 2003), 118. (Describes spontaneous organization in response to 9/11 attacks and recommends strategies to encourage such responses in the future.)

10. Tierney; Schoch-Spana.

11. David Brin, "The Value -- and Empowerment -- of Common Citizens in an Age of Danger." Available online at http://www.futurist.com/portal/future_trends/david_brin_empowerment.htm.

12. J. B. Schramm, "The Best Anti-Terror Force: Us," Washington Post, 23 June 2004, A11. Available online at http://www.washingtonpost.com/wp-dyn/articles/A62454-2004Jun22.html.

13. Jeff Cooper, Principles of Personal Defense (Paladin, 1989).

14. Sara Miller, "In War on Terror, an Expanding Citizens' Brigade," Christian Science Monitor, 13 August 2004. Available online at http://www.csmonitor.com/2004/0813/p01s02ussc.html.

15. The homepage is at http://www.americaswaterwaywatch.org/index.htm.

16. The homepage is at http://public.afosi.amc.af.mil/eagle/index.asp.

17. The homepage is at http://www.highwaywatch.com/.

18. Lisa Zagaroli, "Nation's 3 Million Truckers Enlist in War on Terrorism," Detroit News, 5 June 2002. Available online at http://www.detnews.com/2002/nation/0206/05/a05506969.htm.

19. Neil Samson Katz, "Amateur Astronomers Help NASA Find Killer Asteroids," Columbia News Service, 5 April 2004. Available online at http://www.jrn.columbia.edu/studentwork ... 05/664.asp.

20. Katz.

21. S. M. Stirling, Dies the Fire (Roc, 2004).

22. The homepage is at http://www.legionxxiv.org/Default.htm.

23. The homepage is at http://albionswords.com/armor/roman/lorica.htm.

24. Allan Breed, "French Quarter Holdouts Create 'Tribes,'" Associated Press, 4 September 2005. Available online at http://www.wwltv.com/sharedcontent/nationworld/katrina/stories/090405cckatrinajrfrenchquarter.26851646.html.

25. This didn't get much press attention, but Houston blogger John Little posted a report with photos. It's available online at http://www.blogsofwar.com/looters_strike_in_advance_of_rita.

CHAPTER 6 -- From Media to We-dia

1. Eric Hoffer, The Ordeal of Change (Harper & Row, 1963), 109.

2. Zeyad's original blog post can be found at http://healingiraq.blogspot.com/archives/2003_12_01_healingiraq_archive.html#107107940577248802. A blog report from another Iraqi blogger can be found at http://iraqthemodel.blogspot.com/2003_12_01_iraqthemodel_archive.html#107107057634357719.

3. "Pro-Democracy Rallies in Iraq and More," Weekly Standard, 22 December 2003. Available online at http://weeklystandard.com/Content/Public/Articles/000/000/003/494vhvue.asp.

4. Wagner James Au, "Silence of the Blogs: Why Did the New York Times Ignore Baghdad Blogger Announcements and Accounts of a Big Pro-Democracy Demonstration?" Salon.com, 23 January 2004. Available online at http://www.salon.com/tech/feature/2004/01/23/baghdad_gamer_two/index_np.html.

5. Kennedy School of Government, Case Study No. C-14-04-1731.0, "'Big Media' Meets 'The Bloggers': Coverage of Trent Lott's Remarks at Strom Thurmond's Birthday Party," http://www.ksg.harvard.edu/presspol/Research_Publications/Case_Studies/1731_0.pdf. See also Howard Kurtz, "Why So Late on Lott?" Washington Post, 10 December 2002, http://www.washingtonpost.com/ac2/wp-dyn?pagename=article&contentId=A34186-2002Dec10&notFound=true; Noah Schachtman, "Blogs Make Headlines," Wired News, 23 December 2002. ("It's safe to assume that, before he flushed his reputation down the toilet, Trent Lott had absolutely no idea what a blog was.")

6. The original of this now-famous saying is available online at http://web.archive.org/web/20011214072915/http://kenlayne.com/2000/2001_12_09_logarc.html.

7. James C. Bennett, "The New Reformation?" Available online at http://www.upi.com/view.cfm?StoryID=281220010507337164r.

8. See generally James Fallows, Breaking the News: How the Media Undermine American Democracy (Pantheon Books, 1996); Andrew Kreig, Spiked: How Chain Management Corrupted America's Oldest Newspaper (Peregrine Press, 1987); Ben Bagdikian, The New Media Monopoly (Beacon Press, 2004).

9. Kennedy School of Government; "Pro-Democracy Rallies in Iraq and More," supra.

10. Available online at http://jimtreacher.com/archives/001281.html.

11. Alex Beam, "Standing Alone against Apple," Boston Globe, 24 May 2005. Available online at http://www.boston.com/news/globe/living/articles/2005/05/24/standing_alone_against_apple/.

12. See Robert Pierre and Ann Gerhart, "News of Pandemonium May Have Slowed Aid: Unsubstantiated Reports of Violence Were Confirmed by Some Officials, Spread by News Media," Washington Post, 5 October 2005, A08. Available online at http://www.washingtonpost.com/wp-dyn/content/article/2005/10/04/AR2005100401525.html; Matt Welch, "Echo Chamber in the SuperDome," Reason.com, 4 October 2005, http://www.reason.com/links/links100405.shtml.

13. Garrett Hardin, "The Tragedy of the Commons," Volume 162, Science (1968), 1243.

14. Nick Denton, "Comments and Communities," Nickdenton.com, http://www. nickdenton.org/archives1004219.html.

15. Jeff Jarvis, "Exploding Porn," Buzzmachine.com, http://www.buzzmachine.com/archives/2004_10_22.html#008254.

16. Jonathan Peterson, "Breaking Down Peter Chernin's Comdex Keynote," Way.nu, http://www.way.nu/archives/000493.html.

17. Daniel Lyons, "Attack of the Blogs," Forbes.com, 14 November 2005. Available online at http://www.forbes.com/forbes/2005/1114/128.html.

18. Dan Gillmor, We the Media (O'Reilly, 2004).

19. Joe Trippi, The Revolution Will Not Be Televised: Democracy, The Internet, and the Overthrow of Everything (Regan Books, 2004).

20. Hugh Hewitt, Blog: Understanding the Information Reformation That's Changing Your World (Nelson Books, 2005).

INTERLUDE -- Good Blogging

1. Available online at http://web.archive.org/web/20021113004102/http://www.lileks.com/bleats/archive/02/1002/100202.html.

CHAPTER 7 -- Horizontal Knowledge

1. The Hephthalite, or "White" Huns, ruled Central Asia in the fifth and sixth centuries, until they were exterminated by the Persians. For more information, visit http://www.silkroad.com/artl/heph.shtml.

2. The rocket equation tells how high a rocket can fly and how great a velocity it can achieve, given its exhaust velocity, fuel, etc. For more information, visit http://web.media.mit.edu/~sibyl/project ... ocket.html.
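In its standard (Tsiolkovsky) form, the equation relates the change in velocity a rocket can achieve to its exhaust velocity and its ratio of fueled to empty mass:

\Delta v = v_e \ln\left(\frac{m_0}{m_f}\right)

where v_e is the effective exhaust velocity, m_0 is the initial (fully fueled) mass, and m_f is the final (empty) mass. Because the dependence on the mass ratio is only logarithmic, big gains in performance require either much better exhaust velocity or a much larger share of the vehicle devoted to propellant.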

3. As I write this, Biden has received $75,150 from the TV/movies/music industries for the 2006 election cycle. More information is available at http://opensecrets.org/politicians/indu ... cycle=2006.

4. That's actually true. I looked up all these things in under five minutes total while writing this. At least so long as "draw me a beer" means "draw me a beer and bring it to my table."

5. Nick Denton, "Organizational Terrorism," Nickdenton.org, http://www. nickdenton.org/ archives/006004.html#006004.

6. William J. Broad, "At Los Alamos, Blogging Their Discontent," New York Times, 1 May 2005.

7. JoAnn S. Lublin, "The Open Inbox," Wall Street Journal, 10 October 2005, B1. Available online at http://online.wsj.com/public/article/SB112890006139064049PNxxU56QuvTOicPmJSXQnDrVmn8_20061010.html?mod=blogs. Excerpt: "Technology has really made this staff dialogue possible," observes Henry A. McKinnell Jr., CEO of New York-based Pfizer Inc., the world's largest drug maker. While being driven to meetings, the 62-year-old executive reports, "I don't look out the window. I use my BlackBerry and answer my email." He calls the roughly seventy-five internal emails he gets every day "an avenue of communication I don't otherwise have." He adds, "I really consider this an early-warning system." I think he's right to look at it that way.

8. Julia Scheeres, "Pics Worth a Thousand Protests," Wired News, 17 October 2003, http://wired-vig.wired.com/news/culture ... -2.00.html?tw=wn_story_page_next1.

9. Jesse Walker, "Is That a Computer in Your Pants? Cyberculture Chronicler Howard Rheingold on Smart Mobs, Smart Environments, and Smart Choices in an Age of Connectivity," Reason.com, April 2003. Available at http://www.reason.com/0304/fe.jw.is.shtml.

10. Clive Thompson, "On the Media," WNYC, 20 December 2002. Transcript available at http://www.onthemedia.org/transcripts/t ... ts_122002_mobs.htm.

CHAPTER 8 -- How the Game Is Played

1. "Violent Video Games under Attack," Wired News, 4 July 2004, http://wired.com/news/games/0.2101.6410 ... _tophead_3. 2. See her website at http://www.violentkids.com for more information.

3. James Dunnigan, "Troops Game Their Way out of Ambushes," StrategyPage.com, 5 July 2004, http://www.strategypage.com/dls/articles/200475.asp.

4. Frank Vizard, "Couch to Combat: A Popular Computer Game Called 'America's Army' Has Evolved into a High-Tech Tool for Training Today's Soldiers," Popular Mechanics, June 2005, 80.

5. Dave Kopel and Glenn Reynolds, "Computer Geeks and War," NationalReview.com, 1 October 2001, http://www.nationalreview.com/kopel/kopel100101.shtml.

6. Andrew Leonard, "Gun Mad," Salon.com, 18 April 1998, http://archive.salon.com/21st/feature/1998/04/cov_20feature2.html.

7. B. H. Liddell Hart, Strategy (Praeger, 1967).

8. See James Glassman, "Good News! The Kids Are Alright," TechCentralStation.com, http://techcentralstation.com/071604E.html. (Summarizes results of National Youth Survey and related studies.)

9. It's dangerous to make too much of any one study, of course, and studies of sexual behavior -- and in particular teen sexual behavior -- are probably less trustworthy than most. Another study suggests that teens are having more oral sex -- which may account for the lowered pregnancy rates. See National Center for Health Statistics, "Sexual Behavior and Selected Health Measures: Men and Women 15-44 Years of Age," 2002. Available online at http://www.cdc.gov/nchs/products/pubs/p ... /ad362.htm. See also Laura Sessions Stepp, "Study: Half of Teens Have Had Oral Sex," Washington Post, 16 September 2005, A07. Available online at http://www.washingtonpost.com/wp-dyn/content/article/2005/09/15/AR2005091500915.html.

On the other hand, perhaps online porn -- which often emphasizes oral sex -- is behind this change as well. While some may feel that oral sex without pregnancy is no improvement over traditional sex with the risk of pregnancy, I suppose I regard this substitution, to the extent it's genuine, as some degree of progress. At any rate, there seems to be no disagreement about the decline in the pregnancy rate, regardless of cause.

CHAPTER 9 -- Empowering the Really Little Guys

1. Richard P. Feynman, There's Plenty of Room at the Bottom, ed. Horace D. Gilbert (1961), 295-96.

2. On the artificial kidneys, see "Nanotechnology Used to Help Develop Artificial Kidney," ABC News Online, http://www.abc.net.au/news/newsitems/200509/s1461541.htm.

3. Information on the National Nanotechnology Initiative can be found at its website, http://www.nano.gov -- but information on classified Defense Department work is, of course, classified.

4. Robert A. Freitas Jr., Nanomedicine, Volume I: Basic Capabilities (Landes Bioscience, 1999). See also Robert A. Freitas Jr., Nanomedicine, Volume IIA: Biocompatibility (Landes Bioscience, 2003). On enhanced cognition, see Kelly Hearn, "Future Soldiers Could Get Enhanced Minds," UPI, 19 March 2001, LexisNexis Library, UPI File (describing planned use of nanotechnology to enhance soldiers' cognition and decision-making under stress).

5. National Science and Technology Council (2004), available online at http://nano.gov/nni04_budget_supplement.pdf.

6. National Science and Technology Council, 27.

7. National Science and Technology Council.

8. National Science and Technology Council, 33.

9. For a summary of this debate, see Judith P. Swazey, et al., "Risks and Benefits, Rights and Responsibilities: A History of the Recombinant DNA Research Controversy," Volume 51, Southern California Law Review (1978), 1019.

10. Available online at http://www.dnafiles.org/PDFs/therapy.pdf.

11. See David Whitehouse, "First Synthetic Virus Created," BBC News, 11 July 2002. Available online at http://news.bbc.co.uk/2/hi/science/nature/2122619.stm.

12. Available online at http://www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf.

13. Available online at http://www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf.

14. Howard Lovy, Nanobot blog, http://nanobot.blogspot.com/2003_07_20_nanobot_archive.html#105905157013774164.

15. Testimony of Dr. Vicki L. Colvin, director, Center for Biological and Environmental Nanotechnology (CBEN), and associate professor of chemistry, Rice University, Houston, Texas, before the U.S. House of Representatives Committee on Science, in regard to "Nanotechnology Research and Development Act of 2003," 9 April 2003. Available online at http://www.house.gov/science/hearings/f ... colvin.htm.

16. Ian Bell, "Upgrading the Human Condition," Sunday Herald (Glasgow), 1 August 2004. Available online at http://www.sundayherald.com/43701.

17. "China's Nanotechnology Patent Applications Rank Third in World," InvestorIdeas.com, 3 October 2003, http://www.investorideas.com/ Companies/Nanotechnology/ Articles/China'sNanotechnology 1003,03.as. See also Dennis Normile, "Chinas R&D Power, Truth about Trade & Technology," 2 September 2005, http://www.truthabouttrade.org/ article.asp?id=4364. ("Ernest Preeg, senior fellow in trade and productivity for the Manufacturers Alliance/MAPI, warns in his just released book, The Emerging Chinese Advanced Technology Superstate (jointly published by the Manufacturers Alliance/MAPI and the US Hudson Institute in June 2005) that 'China is right up there with the US in nanotechnology and coming on strong in biotech and in genetically modified agriculture."')

18. "Indian Scientists Should Make Breakthrough in Nano Technology: Kalam," IndiaExpress.com, 1 July 2004, http://www.indiaexpress.com/ news/ technology/20040701-0.html.

19. Daniel Headrick, The Tools of Empire: Technology and European Imperialism in the Nineteenth Century (Oxford University Press, 1981).

20. Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (Viking, 2005), 415.

21. Kurzweil.

CHAPTER 10 -- Live Long -- and Prosper!

1. Robert Fogel, The Escape from Hunger and Premature Death, 1700-2100: Europe, America, and the Third World (Cambridge University Press, 2004), 40.

2. Richard A. Miller, "Extending Life: Scientific Prospects and Political Obstacles," in Stephen G. Post and David Binstock, eds., The Fountain of Youth: Cultural, Scientific, and Ethical Perspectives on a Biomedical Goal (Cambridge University Press, 2004), 228-29.

3. Gemma Casadesus, et al., "Eat Less, Eat Better, and Live Longer: Does It Work and Is It Worth It? The Role of Diet in Aging and Disease," in The Fountain of Youth, 201, 203-4.

4. Casadesus, 235.

5. Jonathan Swift's "struldbrugs" lived a very long time, but aged all the while, with deeply unfortunate results. See Jonathan Swift, Gulliver's Travels, ed., Paul Turner (Oxford University Press, 1998), 199-206.

6. Robert Arking, "Extending Human Longevity: A Biological Probability," in The Fountain of Youth, 177, 191-92.

7. Arking, 192-93.

8. Aubrey D.N.J. de Grey, "An Engineer's Approach to Developing Real Anti-Aging Medicine," in The Fountain of Youth, 249.

9. Leon Kass, "L'Chaim and Its Limits: Why Not Immortality?" in The Fountain of Youth, 304, 309, 312.

10. Centers for Disease Control, "Ten Great Public Health Achievements: United States, 1900-1999," Volume 48, Morbidity and Mortality Weekly Report (1999), 241. Available at http://www.cdc.gov/epo/mmwr/preview/mmwrhtml/00056796.htm.

11. Karen Wright, "Staying Alive," Discover, November 2003, 11.

12. S. Jay Olshansky, Leonard Hayflick, and Thomas Perls, "Anti-Aging Medicine: The Hype and the Reality -- Part I," Volume 59, J. Gerontology: Biological Sciences (2004), 513.

13. Gregory Stock and Daniel Callahan, "Point-Counterpoint: Would Doubling the Human Life Span Be a Net Positive or Negative for Us Either as Individual or as a Society?" Volume 59, J. Gerontology: Biological Sciences (2004), B554, B558. ("[T]o run a society, you have to both say no to people and to require people to do what they don't want to do. There are some higher goods than what we personally want.")

14. Stock and Callahan, 557: "[W]e could get a pretty good sense of likely possibilities based on our present experience. For instance, I've become interested in universities: What happens now in universities that don't have mandatory retirement? First of all, some people stay beyond seventy, between 5 percent and 10 percent in the universities I've looked at.... Most importantly, they block the entry of young people onto the faculty."

15. On the abolition of mandatory retirement, both within and without the academic world, see Pamela Perun, "Phased Retirement Programs for the Twenty-First Century Workplace," Volume 35, John Marshall Law Review (2002), 633.

16. Perun, 559.

17. 539 U.S. 558 (2003).

18. 381 U.S. 479 (1965).

19. Douglas Clement, "Why 65?" FedGazette, March 2004, http://minneapolisfed.org/pubs/fedgaz/04-03/65.cfm.

20. See, for example, Alan Greenspan, "U.S. Must Pare Retirement Benefit Promises," Washington Post, 29 February 2004, A3. ("Greenspan again recommended gradually raising the eligibility age for both Medicare and Social Security, to keep pace with the population's rising longevity.")

21. Sebastian Moffett, "For Ailing Japan, Longevity Begins to Take Its Toll," Wall Street Journal, 11 February 2003, A1. See also Phillip Longman, "The Coming Baby Bust," Foreign Affairs, May/June 2004, 64.

22. Longman, 64.

23. Ronald Bailey, Liberation Biology: The Scientific and Moral Case for the Biotech Revolution (Prometheus Books, 2005), 242.

24. Bailey, 18.

25. Bailey, 132.

CHAPTER 11 -- Space: It's Not Just for Governments Anymore

1. Webb Wilder, "Rocket to Nowhere," Acres of Suede (Watermelon Records, 1996).

2. Holman W. Jenkins, "NASA's Coming Crackup," Wall Street Journal, 5 October 2005, A21. Available online at http://online.wsj.com/article_print/SB112847638707060287.html.

3. NASA Contests and Prizes: How Can They Help Advance Space Exploration, Hearings before the Subcommittee on Space and Aeronautics, Committee on Science, U.S. House of Representatives, 15 July 2004 (testimony of Peter Diamandis). Available online at http://commdocs.house.gov/committees/science/hsy94832.000/hsy94832_0.htm.

4. Alan Boyle, "NASA Announces Prizes for Space Breakthroughs," MSNBC.com, 24 March 2005, http://msnbc.msn.com/id/7280483/.

5. For more on space elevator technology, see Bradley Carl Edwards, "A Hoist to the Heavens," IEEE Spectrum, 21 August 2005, http://www.spectrum.ieee.org/aug05/1690.

6. Pub. L. 100-685, Title II § 217, 102 Stat 4094 (1988), codified at 42 USC §2451 (2000).

7. Kucinich's bill is discussed in Glenn Harlan Reynolds, "Moonstruck," TechCentralStation.com, 25 September 2002, http://www.techcentralstation.com/092502A.html.

8. Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies (1967), 18 UST 2410 (1969).

9. National Research Council, "Task Group on Issues in Sample Return, Mars Sample Return: Issues and Recommendations: The Significance of Martian Meteorites," available at http://www.nap.edu/books/0309057337/html/17.html.

10. Martyn Fogg, Terraforming: Engineering Planetary Environments (SAE International, 1995).

11. Robert Zubrin, Entering Space: Creating a Spacefaring Civilization (Tarcher, 1999), 227.

12. Robert Pinson, "Ethical Considerations for Terraforming Mars," Volume 32, Environmental Law Reporter (2002), 11333, 11341.

13. John A. Ragosta Jr. and Glenn H. Reynolds, "In Search of Governing Principles," Volume 28, Jurimetrics: Journal of Law, Science, and Technology (1988), 473.

14. William Wu, "Taking Liberties in Space," Ad Astra, November 1991, 36. This point is reinforced by recent movies, such as Outland and Total Recall, that depict life in space colonies as harshly controlled.

15. "Governance in Space Project, Declaration of First Principles for the Governance of Space Societies," reprinted in Glenn H. Reynolds and Robert P. Merges, Outer Space: Problems of Law and Policy (Westview Press, 1997), 401-2.

16. Andrew Lawler, "Lessons from the Past: Toward a Long-Term Space Policy," in Lunar Bases and Space Activities of the Twenty-First Century, ed. W. W. Mendell (Lunar & Planetary Institute, 1985), 757, 762-63.

17. George Robinson and Harold White, Envoys of Mankind: A Declaration of First Principles for the Governance of Space Societies (Smithsonian Institution, 1986).

18. Bob Zubrin, "The Significance of the Martian Frontier." Available online at http://www.newmars.com/archives/000026.shtml.

19. George Dyson, Project Orion: The True Story of the Atomic Spaceship (Henry Holt & Co., 2002).

20. In addition, the 1957 Pascal-B underground nuclear test accidentally launched a manhole cover at speeds that may have exceeded escape velocity, though it isn't clear whether Orion researchers knew about this. The story of this test, often misnamed "Operation Thunderwell," which was actually the name of another nuclear-spacecraft project, has sparked many Internet legends.

21. Dyson, Project Orion.

22. For Freeman Dyson's firsthand account, see "Saturn by 1970" in Freeman Dyson, Disturbing the Universe (Harper & Row, 1979), 107.

23. Dyson, Project Orion, 119.

24. Multilateral Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space, and Under Water (1963), 14 UST 1313 (1963). For more on this, see Glenn H. Reynolds and Robert P. Merges, Outer Space: Problems of Law and Policy, 2nd edition (Westview Press, 1997).

25. Quoted in Jack H. McCall, "The Inexorable Advance of Technology: American and International Efforts to Curb Missile Proliferation," Volume 32, Jurimetrics: Journal of Law, Science, and Technology (1992), 387, 426.

26. McCall.

27. 14 Weekly Comp. Pres. Doc. 1135, 1136 (20 June 1978). ("Purposeful interference with space systems shall be viewed as an infringement upon sovereign rights.")

28. J. Storrs Hall, Nanofuture: What's Next for Nanotechnology (Prometheus Books, 2005), 284.

CHAPTER 12 -- The Approaching Singularity

1. Vernor Vinge, "What Is the Singularity?" Available online at http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html.

2. Joel Garreau, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies, and What It Means to be Human (Doubleday, 2005), 21. For more on this topic, see Ramez Naam, More than Human: Embracing the Promise of Biological Enhancement (Broadway Books, 2005); Ron Bailey, Liberation Biology: The Scientific and Moral Case for the Biotech Revolution (Prometheus Books, 2005); Gregory Stock, Redesigning Humans: Choosing Our Genes, Changing Our Future (Mariner Books, 2003).

3. Erik Baard, "Cyborg Liberation Front: Inside the Movement for Posthuman Rights," Village Voice, 30 July/5 August 2003. Available online at http://www.villagevoice.com/news/0331,baard,45866,1.html.

4. Isaac Asimov, Foundation (Doubleday, 1966), 112.

5. Jonathan Leake, "'Miracle Mouse' Can Grow Back Lost Limbs," Times (London), 28 August 2005. Available online at http://www.timesonline.co.uk/article/0,,2087-1754008,00.html.

6. See Mark Honigsbaum, "Maverick Who Believes We Can Live Forever," Guardian, 10 September 2005. Available online at http://www.guardian.co.uk/print/0,3858,5282378-103690,00.html.

7. "Nanotechnology and Health," Nature, 10 September 2005. Available online at http://www.nature.com/news/2005/050905/ ... 905-2.html. 8. "Diamonds Are Not Forever," PhysicsWeb.org, hnp://physicsweb.org/articles/ news/9/8/16/1?rss=2.0.

9. Ray Kurzweil, "The InstaPundit Interview," InstaPundit.com, 2 September 2005, http://instapundit.com/ archives/025289.php.

10. James Branch Cabell, Jurgen: A Comedy of Justice (IndyPublish.com, 2004), 292.

CONCLUSION -- The Future

1. Joel Miller, Size Matters: How Big Government Puts the Squeeze on America's Families, Finances, and Freedom (Nelson Current, 2006).

2. This topic actually gets some attention from Gene Sperling in his book, The Pro-Growth Progressive: An Economic Strategy for Shared Prosperity (Simon & Schuster, 2005), which calls for empowering individuals as a substitute for restricting markets.

3. Glenn Harlan Reynolds, "Horizontal Knowledge," TechCentralStation.com, 4 June 2003, http://www.techcentralstation.com/060403A.html.

4. Kevin Kelly, "We Are the Web," Wired, August 2005. Available online at http://wired.com/wired/archive/13.08/tech.html.

5. For some extended thoughts on the pluses and minuses of democracy, and its role in American constitutional thought, see Glenn Harlan Reynolds, "Is Democracy Like Sex?" Volume 48, Vanderbilt Law Review (1995), 1635.

6. This is known in some circles as Sturgeon's Law. According to the Wikipedia entry, there are multiple anecdotes regarding the origins of this observation. See "Sturgeon's Law," Wikipedia. Available online at http://en.wikipedia.org/wiki/Sturgeon's_law.

7. Bill Quick, "Book Sales," DailyPundit.com, http://www.dailypundit.com/ newarchives/005081.php#005081.

8. Charles Krauthammer doesn't like that one bit. See Charles Krauthammer, "A Flu Hope, or Horror?" Washington Post, 14 October 2005, A19. Available online at http://www.washingtonpost.com/wp-dyn/content/article/2005/10/13/AR2005101301783.html.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Tue Nov 05, 2013 1:34 am

INDEX

A

Acid, 8, 48
Ad Astra, 222
advertisers, advertising, xiv, 12, 42, 262
Afghanistan, xiv
Africa, 52-55
Age of Spiritual Machines, The, 247
AIDS, 164, 240
Air Force (USAF), 230
Airplane, 43
Al Qaeda, 65-68, 79, 83
Albion Arms, 86
Amazon.com, 20-21, 94, 255, 257
American Prospect, 116
Antarctica, 223
Apple Computer, 55, 59, 96
Art of War, 145
Artificial Intelligence (AI), 248, 251
ASCAP, 59
Asilomar, 163
Asimov, Isaac, 241
Associated Press, 87, 91
Athens (Georgia), 36
Athens (Greece), 260
Atlantic, 118
Audition, 8, 48

B

Backstreet Boys, 219
Baghdad, 89, 95, 108, 119
Bailey, Ron, 184
Baker, Linda, 36
Barbie, 26
Barnes & Noble, 33
Bashman, Howard, 119
Bay, Austin, 266
bCentral, 56
Beam, Alex, 96
Beatles, The, 219
beer, xi-xiii, xv, 1, 87, 93, 122, 124, 154
Bell, Ian, 169
Belmont Club, 266
Bennett, Jim, 91
Bennett, Ralph Kinney, 18
Biden, Joe, 122, 275
big media, iii, xiv, 63, 90-97, 100-2,
105-7, 111, 113, 121, 128, 131
Billboard, 56
biotechnology, 9, 163-64, 166, 169,
237, 243, 264
BitTorrent, 47
Black, Charles, 44
Blair, Jayson, 97
Blake, William, 5
blog, blogging, blogger (see also weblog),
x, xiii-xiv, 10-11, 14, 59, 66, 74,
89-95, 98-102, 107-8, 111-13,
115-19, 122, 125-26, 128,
130-33, 136, 146, 167, 190, 244,
250, 255, 259, 262, 264, 266-67
Blog: Understanding the Information
Reformation That's Changing Your
World, 101
Blogger.com, xiii, 115
BMI, 59
Boeing, 195
Books-A-Million, 33
Borders, 30, 32-35
Boston Globe, 96
Bostrom, Nick, 193
Bourbon Street, 87
Bratz, 26
Brin, David, 77
Build-A-Bear, 37-40
Bush, George W., 67, 90-91, 141, 180,
196, 207, 209, 212
Byrd, Geoff, 56

C

California, 54, 163
Callahan, Daniel, 181-82
Cambodia, 91
Cambridge University, 178, 185, 242
Camel Studios, 48
Caplan, Arthur, 165
Carter, 129, 131, 140, 232-33
CB radio, 90, 128-31, 133
CBS, 90, 131
CD, xii, 31, 48, 50, 52, 54, 58, 104
CDBaby.com, 58
Centers for Disease Control, 180
Chargaff, Erwin, 164
China, 2, 26, 170, 181, 226-27, 231-33,
252
Chinese, 5, 170-71, 226-28, 231, 233,
266
Christian Science Monitor, 83
Cialis, 181
Ciarelli, Nicholas, 96
Clark, William, 210, 218
Clarke, Richard, 67-68
Clinton, Bill, 91, 196, 207
Cloud, The, 44
Club Libby Lu, 26
CNN, 110, 112
Co. Operative, 53
Coast Guard, 83
Coffin's Shoes, 22
Cold War, 198, 210, 227
Columbine High School, 140
Colvin, Vicki, 168
Conde Nast, 89
Costco, 20, 23
Crichton, Michael, 155, 157, 167
Cubase, 8, 48-49
cyberspace, 258

D

David, xv, 8, 9, 268
David, Jon, 66, 67
de Grey, Aubrey, 178, 185-93
Decision Games, 143
Denton, Nick, 100, 126, 132
Department of Homeland Security, 83
desktop, 7, 269
Diamandis, Peter, 202
Diamond Age, The, 148, 173
Dies the Fire, 85, 273
Dilbert, 8, 11, 19, 21, 35, 256
Discover (magazine), 180
DNA, 147, 157, 163-65
Douglas, Carol Anne, 33
Drake, David, 143
Drexler, Eric, 155
Drey, Jenna, 56
Duke Nukem, 150, 151
Dungeons & Dragons, 41, 143
Dunnigan, Jim, 140
Dyson, Freeman, 164, 229-30, 232
Dyson, George, 228-29, 233

E

eBay, 11, 19-21, 255-57
Economist, 37
Einstein, Albert, 68, 240
Electronics magazine, 7
Eno, Brian, 48
Environmental Law Reporter, 220
Envoys of Mankind, 224
EPA Science Advisory Board, 159
Escape from Hunger and Premature Death
1700-2100, The, 5
Estrada, Joseph, 134-35
ETC Group, 156
Europe, xiv, 40, 70, 105, 169-71, 225
Excel, 8
Extropy Institute, 190

F

Fab: The Coming Revolution on Your
Desktop-From Personal Computers
to Personal Fabrication, 7
FacesFromTheFront.com, 102, 104, 108
Fake, Caterina, 118
Federal Bureau of Investigation (FBI),
66-67
Federal Communications Commission
(FCC), 59-63, 129, 198-99
Feynman, Richard P., 154
Finland, 118
First Amendment, 41, 44, 97, 149
Flight 93, 69, 71, 73-74, 77, 80
Flint, Eric, 143
Foreign Affairs, 184
Fourth Rail, 266
Fogel, Robert, 5, 175
Fogg, Martyn, 216
Forbes, 101
Ford, Henry, 4, 6, 19
Foresight Institute, 190
FOX, 112
France, xiv
Freitas, Robert A., Jr., 155, 158
FriendFinder, 36, 44
Fripp, Robert, 48
Fukuyama, Francis, 240

G

Gabriel, Peter, 134
Galbraith, John Kenneth, 7-8
Gallo, John, 4
Galt, Johnathan, 66
GarageBand, 21, 55-59
Gates, Bill, 55
General Motors, 6
George Washington University, 25
Gershenfeld, Neil, 7, 169
Gibson, William, 10, 268
Gillmor, Dan, 101
GlennReynolds.com, 115-16
GNR (Genetics, Nanotechnology, and
Robotics), 248, 252
Gobel, David, 191
Goliath, xv, 8-9, 228, 258, 268
Google, 122, 124-25, 149, 240
Great Good Place, The, 32
Grand Theft Auto, 151
Great Wall of China, 2
Greenpeace, 166--67
Greenspan, Alan, 183
Griswold v. Connecticut, 182
Grokster, 47
Guderian, Heinz, 145
Guinness Book of World Records, 122, 136
Gulag, 268
Gulf War, 146

H

Hall, J. Storrs, 234
Hart, Sir Basil Henry Liddell, 145
Hasbro, 143
Hawking, Stephen, 205
Headrick, Daniel, 171
Hearst, William Randolph, 91-92
Heinlein, Robert, 193, 207
Hephthalite Huns, 122, 124
Hewitt, Hugh, 101
High Fidelity, 34
Hill, Avalon, 143
Hill, Terry, 48
Hockenberry, John, 165
Hoffer, Eric, 89
Hollister & Co., 29, 31
How Appealing, 119
Huffington Post, 16

I

IBM, 6
Ilay Izy, 53
Illinois High School Association, 259
Immortality Institute, 190
iMovie, 8
India, 51, 53, 171, 231-32, 252, 266
Indian Ocean, 95, 113, 266
Industrial Revolution, 2-3, 5, 9, 13, 17,
153, 160, 262
Inner Circles, 262
InstaPundit, x, xiii, 115, 244
Internet, xii-xiii, xv, 8, 15, 18, 20, 22,
30-31, 36, 39-40, 44, 48, 51-53,
56-59, 62-63, 66-67, 84, 89-92,
97, 110, 113, 117, 121-23,
126-27, 130-33, 135, 146, 149,
153, 165, 197, 199, 259-63, 267
iPod, 57, 59
Iraq, xiv, 89, 95, 102-4, 107-13, 141,
266
Irritable Bowel Syndrome, 240
It's a Wonderful Life, 53
iTunes, 59

J

Jarvis, Jeff, 11, 16, 18, 20, 89, 100, 269
Jenkins, Holman, 199
Johannes, J. D., 102-3, 105, 107
Johnny White's Sports Bar, 87
Johnson, Charles, 66

K

Kalam, Abdul, 171
Kass, Leon, 180, 184, 190, 192
Katrina, Hurricane, 86-87, 113, 266
Kaus, Mickey, 12, 18, 119, 125
Kelly, Kevin, 257-58
Kennedy, John F., 227, 231-32
Kennedy School of Government, 95
Kerry, John, 91
Kinko's, 18
Kiwi and NERVA Projects, 211
Knoxville, TN, 22, 33, 42
Kohl's, 22
Kopel, Dave, 141
Kowalski, Richard, 84
Kucinich, Dennis, 212
Kurtz, Howard, 126
Kurzweil, Ray, 172-73, 243-44, 246,
248, 250-51

L

LA Times, 112
laptop, 8, 14, 25, 30-32, 35, 104, 158
Lawler, Andrew, 223
Lawrence v. Texas, 182
Layne, Ken, 91
Le Corbusier, 36
Lebanon, 135
Lee, Alvin, 48
Lessig, Larry, 59-60
Lewis, Meriwether, 210, 218
Liam Flavas, 26
Life Extension Foundation, 190
Lileks, James, 117-19
Limited Test Ban Treaty, 230-32
Lindbergh, Charles, 200
LinkExchange, Inc., 56
Live 8, 55
Live Aid, 55
Lloyd's of London, 37
Lockerbie, 69
lorica segmentata, 86
Los Alamos, 131-32
Lott, Trent, xiv, 90, 95
Lotus, 8
Lovy, Howard, 167
Luther, Martin, 92

M

Madagascar, 53
Man in the Gray Flannel Suit, 9
Markoff, John, 128
Mars, 196, 199, 207-21, 223, 225
Marshall, Josh, 119
Marx, Karl, 9
Marxism, 6
McAfee, 173
McCartney, Paul, 58
McKibben, Bill, 240
Medicare, 182-83
Merritt, Jeralyn, 119
Meselson, Matthew, 164
Methuselah Foundation, 191
Methuselah Mouse Prize, 191
Menger, Theresa, 40
Microsoft, 12, 56
Miller, Joel, 257
Miller, Richard, 176
Minor Planet Mailing List, 84
Modigliani, 53
Monster, 49
Moore, Gordon, 7
Mossberg, Walt, 58
Movable Type, 115-16
MP3.com, xii, 52-53, 55
MSNBC, 115
Mudville Gazette, 266
music, xii-xiii, 10, 21, 31, 34, 47-50,
52-60, 63, 100, 134, 252, 261

N

Namibia, 53
Nanomedicine, 158
Nanos, Dr. G. Peter, 131
nanotechnology, 9, 15, 153-63, 166-73,
205-6, 234-35, 237-38, 242-43,
245, 248, 255, 264
Napoleon, 98
Napoleonic Wars, 267
Napster, 47, 57
NASA, 84, 196, 198-200, 202, 204-5,
207-9, 211, 213, 230, 266
NASCAR, 5
National Association of Music Merchants
(NAMM), 252
National Guard, 91
National Nanotechnology Initiative:
Research and Development Supporting
the Next Industrial Revolution, 160
National Research Council, 215
National Review Online, 116
National Space Council, 197
Nebraska, 50
Netherlands, xiv
New Haven (Connecticut), 33
New Industrial State, The, 6
New Orleans, 86, 87
New Republic, 116
New York Times, 12, 22-23, 93-95, 111,
125, 128, 131
Nigeria, 53-55, 135
Niven, Larry, 241, 265

O

Off Our Backs, 33
Office Depot, 18, 20
O'Keefe, Sean, 198
Oldenburg, Ray, 32, 34
Olsen, Greg, 208
Online Journalism Review, 128
Orion, 211, 228-34
Orteig Prize, 200
Outer Space Treaty, 213-14, 231

P

Panzer Leader, 145
Partovi, Ali, 55-57
Pax, Salam, 119
PayPal, 66
PC (see also desktop), xii, 54, 100-1, 143
Philippines, 134-35
Picasso, 53
Pike, Zebulon, 218
Pink, Dan, 15
Pinson, Robert, x, 220
podcast, 47, 57-60, 63
Poland, 51
pornography, 66, 149-51
Postrel, Virginia, 15, 22-24, 38-39, 119
Pournelle, Jerry, 143
Pravda, xv
Prey, 155, 162
Prince Charles, 169
Project Orion: The True Story of the Atomic
Spaceship, 228
Protestant Reformation, 91-92
Pruitt, Fred, 118
PSP Audioware, 51-52
Pyle, Ernie, 108, 112
Pyramids, 2

Q

Quick, Bill, 262

R

Radical Evolution, 239
Raines, Howell, 125-27
Rather, Dan, xiv, 97
Rathergate, 90, 101
Reagan, Ronald, 47, 130, 207
Reason, 116
record labels, xii, 57
recording, xii, 8, 47-48, 50, 52, 55, 57,
59
Red Cross, 81-82
Revolution Will Not Be Televised, The,
101
Robinson, George, 223
Rocky Mountain News, xiv
Rogaine, 185
Romenesko, Jim, 126
Rotary Club, 83
Rutan, Burt, 201, 234

S

Sak's, 26
Salon, xii, 36, 90, 142
Sam's Club, 20, 27
SARS, 266
Saudi Arabia, 66, 84
Scalzi, John, 14
Scarface, 53
Shaheema, Ras, 53
Shoe Warehouse, 22
Shropshire, Philip, 221
Shuttleworth, Mark, 208
Sims, The, 147-48
Simulations Publications Inc. (SPI), 143
Singularity Is Near: When Humans
Transcend Biology, The, 172, 243
Size Matters: How Big Government Puts
the Squeeze on American Families,
Finances, and Freedom, 257
Slashdot, 94, 99
Slate, xii, 12
Sliding Doors, 53
Smith, Adam, 3, 6, 24
Smithsonian Institution, 221-22
Smokey and the Bandit, 129
Snopes.com, 136
Society for Creative Anachronism, 85,
142
Social Security, 182-83
Socrates, David, 103
Soviet Union (U.S.S.R.), xv, 144, 181,
227, 232, 268
Space Settlements Act, 207
Spanish-American War, 91
Spanish Flu, 265
Spiked, xii
Staples, 20
Starbucks, 35
Steele, Richard, 37
Stephenson, Neal, 148, 173
Stirling, Steve (S. M.), 85-86, 143
Stout, Renee, 53
Strategy & Tactics, 143
Sturgeon, Ted, 261
Sun Tzu, 145
SUNY-Stony Brook, 165
Supreme Court, 42-43, 96, 182
Symantec, 173

T

TalkLeft, 119
Target, 22-23
Tascam, 47-48
Tatler, 37
Taylor, Ted, 229
Technorati.com, xiv, 128
Terraforming: Engineering Planetary
Environments, 216
Thompson, Clive, 135
Thurmond, Strom, 90
Times of London, 241
Tito, Dennis, 208
To Rise Again, 53-54
Todd, Brad, 69
TomPaine.com, 116
Transeau, Brian, 49
Treacher, Jim, 95
Trippi, Joe, 101
Tryphonas, Vasilioas, 87
Turtledove, Harry, 143
TV, 24, 40, 45, 56, 60, 101-3, 105-6,
112, 117

U

Uganda, 50, 52
Ukraine, xiv, 119, 135
Ulam, Stanislaw, 229
United Kingdom (UK), 5, 193
United Press International (UPI), 91,
277
United States, iv, 55, 95, 131, 144,
169-70, 180, 207, 211, 222-23,
226-27, 230-32
US Airways, 84
Usenet, 99, 260

V

Varma, Amit, 266
Viagra, 181, 185
Village Voice, 240
Volokh Conspiracy, 119

W

Wall Street Journal, 58, 133, 199
Wal-Mart, 11, 19-20, 23-24, 27
Warner/Chappell Music, 60
Washington D.C., 25, 33, 77, 272
We the Media, 101
Web, 36, 43, 59-60, 65, 67, 82, 92,
94-95, 97-98, 102, 104, 111,
123-24, 136, 200, 239, 257-58,
261-63
weblog, weblogger (see also blog), xiii, 66,
89, 92, 95, 99-100, 115, 126, 131,
262
Weekly Standard, 89
White, Harold, 223
White House, 68, 180, 196-97
Wiesner, Jerome, 227
Wi-Fi, 14, 18, 31, 35-36, 124-25, 240
Wilder, Webb, 195
Windows Movie Maker, 8
WordPress, 115
World Trade Center, 69, 76
World Transhumanist Association, 190
World War II, 16, 92, 96
Wu, William, 222

X

X-Prize Foundation, 200-2, 228, 234

Y

Yale, 33, 240
Yon, Michael, 108-13

Z

Zeyad, 89-90, 95, 119
Zimbabwe, 53
Zubrin, Bob, 209-10, 213, 216, 234