An Army of Davids: How Markets and Technology Empower

Postby admin » Sat Nov 02, 2013 3:32 am

by Glenn Reynolds
© 2006 by Glenn Reynolds



To my wife and daughter

Table of Contents:

Introduction: Do It Yourself
1. The Change
2. Small Is the New Big
3. The Comfy Chair Revolution
4. Making Beautiful Music, Together
5. A Pack, Not a Herd
6. From Media to We-dia
Interlude: Good Blogging
7. Horizontal Knowledge
8. How the Game is Played
9. Empowering the Really Little Guys
10. Live Long -- and Prosper!
11. Space: It's Not Just for Governments Anymore
12. The Approaching Singularity
13. Conclusion: The Future
Inside and Back Cover


It may be making a difference. At the very least, the fears of the video-game critics seem to be stillborn. American teenagers are doing better than ever, and people are trying to figure out why. Games just might have something to do with it; at the very least, they don't seem to be hurting.

Teen pregnancy is down, along with teen crime, drug use, and many other social ills. There's also evidence that teenagers are more serious about life in general and are more determined to make something worthwhile of their lives. Where just a few years ago the "teenager problem" looked insoluble, it now seems well on the road to solving itself. [8] But why?

Reading about this change, it suddenly occurred to me that I had the answer: porn and video games. That's what's making American teens healthier!

It should have been obvious. After all, one of the great changes in teenagers' social environments over the past decade or so has been far greater exposure to explicit pornography, via the Internet; and violence, via video games. Where twenty or thirty years ago teenagers had to go to some effort to see pictures of people having sex, now those things are as close as a Google query. (In fact, on the Internet it takes some small effort to avoid such pictures.) Meanwhile, video games have gotten more violent, with efforts to limit their content failing on First Amendment grounds.

But, despite continued warnings from concerned mothers' groups, teenagers are less violent, and -- according to some, if not all, studies -- they're having less sex, notwithstanding the predictions of many concerned people that such exposure would have the opposite effect. More virtual sex and violence would seem to go along with less real sex and violence; certainly with less pregnancy and violence. [9]

The solution is clear -- we need a massive government program to ensure that no American teenager goes without porn and video games. Let no child be left behind! Well, no. Not even I'm ready to argue for that kind of legislation, though I suppose candidates interested in the youth vote might want to give it a thought. But the real lesson is that complex social problems are, well, complex, and that the law of unintended consequences continues to apply.

When teen crime and pregnancy rates were going up, people looked at things that were going on -- including increased availability of porn and violent imagery -- and concluded that there might be something to that correlation. It turned out that there wasn't. Porn and Duke Nukem took over the land, and yet teenagers became more responsible and less violent.

Maybe the porn and the video games provided catharsis, serving as substitutes for the real thing. Maybe. And maybe there's no connection at all. (Or maybe it's a different one -- the research indicates that teenagers, though safer and healthier, are also fatter -- so perhaps the other improvements are the result of teens sitting around looking at porn and video games until they're too out-of-shape and unattractive for the real thing.) Most likely, the lesson is that -- once again -- correlation isn't causation, despite policy entrepreneurs' efforts to claim otherwise.

Regardless, the fears of the doomsayers have not come to pass. People can continue to claim that psychological research suggests that video games lead to violence and that porn leads to promiscuity, but in the real world the evidence suggests otherwise. So perhaps we should reconsider regulating video games. And we should definitely take claims of impending social doom with a grain of salt. (Hey, while we're at it, why not encourage surfing porn and playing shoot-'em-up games? After all, as the activists say, if it saves just one child, it's worth it!)

More seriously, such a lack of evidence is reason enough not to shut down the virtual worlds that kids are inhabiting. Instead, we may want to look at the lessons they learn. I don't think that Duke Nukem or Grand Theft Auto are particularly harmful, but it would be useful for people to think about ways of making those games teach productive real-world lessons, and I think that can be done without making them uninteresting. The real world is interesting, after all, and it's very, very good at teaching real-world lessons. The advantage of the virtual world is that those lessons can be learned without bloodshed, bankruptcy, or jail. Seems like a good thing to me.


The challenge in coming decades will be to take advantage of the ability for self-organization and horizontal knowledge that the Internet and other communications technologies provide without letting our entire political system turn into something that looks like an email flamewar on Usenet. I think we'll be able to do that -- most people's tolerance for flaming is comparatively low, and in a democracy, what most people tolerate matters -- but things are likely to get ugly if I'm wrong.
Site Admin
Posts: 35790
Joined: Thu Aug 01, 2013 5:21 am

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sat Nov 02, 2013 3:46 am


Technology empowers ordinary people. But other people are the greatest source of empowerment in this world. In working on this project, I was reminded again that I have been very fortunate with regard to the people I have known.

My wife, Dr. Helen Smith, has been a source of encouragement, advice, and support throughout. Even her health problems have served as inspiration for some parts of this book. Likewise other members of my family -- and in particular my parents, who encouraged my interest in technologies and in writing -- have been great sources of help.

Many of the ideas in this book were first worked out in columns at TCS Daily, where Nick Schulz has been an unfailingly supportive and inspiring editor. When Nick first contacted me to solicit a weekly column, I was reluctant, wondering if I had enough ideas to produce a column a week. Now, nearly four years later, it turns out that he was right, and I did.

My dean, Tom Galligan, and the rest of the faculty and staff at the University of Tennessee College of Law have been uniformly positive and encouraging regarding my writing, even when it has veered from the legal-academic to the technological and sociological. At a time when we hear much about the narrow-mindedness and jealousy of the academy, it's worth noting that the University of Tennessee has been a consistently friendly and supportive place for me, despite the fact that my work is politically incorrect from pretty much any and all angles. I have never regretted choosing to join, and to remain on, the faculty.

Thanks too to my research assistants, Matt Lindsay, Josh Phillips, and Erika Roberts, who discovered typos I had missed, offered helpful stylistic advice, which I sometimes took, and located sources I was unable to find. Likewise to faculty secretaries Sean Gunter, Neal Fischer, Michelle Gilbert, Teresa Michael, and Tammy Neff. Other invaluable, and sometimes indispensable, help in various forms has come from (in no particular order) Ashley Pope, Jennifer Marks, Brannon Denning, Heidi Henning, Chris Peterson, Eric Drexler, Robert Pinson, Jennifer Coffin, Nick McCall, Ralph Davis, Leigh Griffith, John Ragosta, Doug Weinstein, David McCord, Rob Merges, and countless readers and emailers via the InstaPundit and TechCentralStation sites. If you want to understand how technology empowers ordinary people, try writing a popular blog for a while, with a published email address. And my agent, Kate Lee of International Creative Management, has been tireless, persistent, and a pleasure to know.

I hope that you enjoy reading this book as much as I enjoyed writing it and that the trend I outline will continue for the foreseeable future.

Knoxville, Tennessee
19 October 2005

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sat Nov 02, 2013 3:51 am



About fifteen years ago, I started brewing my own beer. Nothing new about that: people have been brewing their own beer for millennia, and my grandfather was reputed to have been a pretty good brewer during Prohibition. But when I started brewing, it was unusual.

It was unusual because rather than brewing their own beer, most people -- almost everyone, in fact -- bought beer from huge brewing companies that made the stuff in giant steel vats. (When they say that it's "beechwood aged," they don't mean it's aged in casks made out of beechwood -- they just throw a few wood chips into the vats for flavor). The industrial beer wasn't bad, exactly, and unlike most of the homebrewed stuff, its quality and flavor were consistent from batch to batch. The problem was that there wasn't much flavor left; in an effort to consolidate brands, to save money, and to appeal to the broadest tastes possible, brewing companies had gradually thinned out their product until a lot of people found it inoffensive, but unsatisfying.

That's why I started brewing my own beer. The beer I brewed was sometimes terrific -- a couple of batches were among the best beer I've ever had -- and sometimes not so great. But it had more character, and it was fun to brew it myself. I learned some things about brewing, I got to experiment with different recipes and approaches, and I got to make the kind of beer I wanted, not the kind that someone else wanted to sell me. It was a little bit cheaper, but that wasn't really the point. The point was that I was making something for myself, to suit me.

Lots of people followed suit, and homebrewing went from an unusual hobby to a fairly common pastime. Beer companies -- and beer distributors -- took note, and the range and quality of beer offerings on tap and in stores dramatically improved. In the end, even the non-homebrewers benefited.

A few years later, I started recording my own music. The technology for digital recording had improved enough that making your own professional quality CDs was possible. I recorded various bands in my basement studio, and, after a while, moved most of my own music production and recording onto a PC. I distributed CDs and downloadable music on the Internet (one of my albums made it to number one for several weeks on the charts) and got written up in places like Salon and Spiked. [1] My brother and I even set up a small record company that, basically, consisted of a few PCs, some microphones, and the Internet. It didn't make us rich, but it did make us a little bit of money. And it made us happy.

Lots of other people were doing the same, and "indie" music has become an important part of the scene -- to the point that major acts sometimes design their CDs so that they look homemade, gaining them extra street cred. Ironically, the CDs that are actually homemade usually look, and sound, as professional as those put out by big labels. I've found a lot of bands that I like through the indie scene, bands that I would have never heard of in the old days. So have a lot of other people.

Then, in what may have been a fateful move, I decided to get into Internet journalism, via what's called blogging today. In the summer of 2001, the now ubiquitous "blogs" were then called "me-zines." I had been posting regularly in "The Fray," the online forum of Slate magazine, and decided to strike out on my own using one of my Fray nicknames: InstaPundit. I set up a weblog on a free service and started posting my opinions, along with links to news items, several times a day. (My first post was on digital music, bringing in another of my interests.) I started blogging partly because I teach Internet Law and like to stay active in some sort of Internet activity, and partly because it looked like fun. I had consumed news and opinion journalism for most of my life, but it seemed that much of it had become a bit thin and flavorless (much like the beer). I thought I could produce something that I'd like better and that perhaps some other people would enjoy too.

I figured that if the blog did well, I'd have a few dozen, maybe even a few hundred, readers a day. By September 10, 2001, I had 1,600 and thought I'd hit the big time. The next day, on September 11, I had nearly triple that, and it just took off from there, though the events responsible for that growth took away a bit of the savor.

Lots of other people started blogging shortly thereafter, and you often hear the same reasons given -- basically variations on "I got tired of watching the video of the towers collapsing," and "I got tired of yelling at the TV." Like me, people were unhappy with the mass-market journalistic product and wanted to try making something of their own.

Since then, blogging has exploded in popularity. As I write this, Technorati is tracking over 22 million blogs -- and bloggers have accomplished a lot in independent journalism: bringing down Trent Lott and Dan Rather; reporting on events in Iraq, Afghanistan, and the Ukraine that Big Media have ignored; and even playing a major role in defeating ratification of the European Constitution in France and the Netherlands. Now bloggers are a normal part of national discourse, featured on TV and quoted in the press. Blogging has become such an important means to reach people that bloggers are courted by advertisers and PR people. Some of the bigger blogs have readerships that rival those of medium-sized newspapers (I get as many as a half-million pageviews some days, and often get more than 250,000, and I get more reader email in a day than The Rocky Mountain News, a top-ten daily, gets in a week).

So what does all this mean ... besides suggesting that professional trendspotters ought to pay attention to my next hobby?

Well, all of these phenomena have something in common: the triumph of personal technology over mass technology. And that's a trend that is going to strengthen over the coming decades.

We're accustomed to thinking that big organizations are the important organizations because that's how it's been in recent centuries. Starting around 1700, big organizations became the most efficient way to do a lot of things. The main sources of power, steam engines and the like, had to be big to be efficient. And keeping track of information required armies of clerks, secretaries, etc., who needed a big organization to support them. These concepts -- which economists call "economies of scope and scale" -- favored big organizations. Big companies, big governments, whatever. Mass production. Bigger was better. Goliath rules.

But that phenomenon was the result of technology, and technology keeps changing. We saw a hint of the new world in a bit of dark humor from the days of the old Soviet Union: making fun of its habit of boasting about size, people joked about a Pravda headline reading "The Soviet Microchip: Largest in the World!" The jab was, of course, that big microchips aren't better. Smaller ones are, and that joke presaged the failure of the Soviet approach and the Soviet Union itself.

In fact, those smaller microchips are one of the main factors responsible for the change. New technologies mean that big organizations aren't necessarily more efficient. The growth of computers, the Internet, and niche marketing means that you don't have to be a Goliath to get along. Like David's sling, these new technologies empower the little guy to compete more effectively. They have, in fact, spawned a veritable army of Davids, now busily competing with the Goliaths in all sorts of fields. And, as with the beer, even where that competition is no real threat to the big guys, it tends to push them to do a better job.

In the chapters that follow, we'll look at how technology is empowering ordinary people in all sorts of ways -- from journalism and entertainment, to homeland security and counterterrorism, to manufacturing and scientific research -- and at how it's likely to influence the world in the future. Because in the future, the efforts of individuals and small groups, acting sometimes on their own and sometimes in informal cooperation with others, are likely to make a bigger difference than they've made in centuries.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sat Nov 02, 2013 4:09 am


Sherman, set the Wayback Machine for 10,000 BC. What does the world look like?

Except for the soon-to-be extinct cave bear or saber tooth tiger here and there, the scale is pretty small. The biggest human organizations are band- and tribe-level: at most a few hundred people, but usually only a few dozen. The line between work and play is pretty blurry. Some things are clearly work, and some things are clearly play, but many are in between, and people go from one to another as circumstances dictate, not according to a schedule. Agriculture hasn't been invented yet, though people brew beer from wild grains and are grasping the concept of reseeding: plants tend to multiply in the same places every year, making it easier to brew beer. (What, you think people invented agriculture for bread?)

The few material possessions that exist are homemade, except for a very small amount of stuff purchased from itinerant traders carrying rare luxuries like amber, obsidian, or dyestuffs. Children aren't sent off to school but hang around the adults as they go about the business of the day. The most dangerous activities, like big-game hunting, are off-limits to the kids, but they grow up quickly and are soon a part of all clan activities.

Even in these caveman days, there's plenty of technology around. Humans are tropical animals, and without technologies like fire and clothing, most of the world would be off limits. Finely wrought flint tools are capable of impressive feats (how do you think those saber tooths and cave bears became extinct?), but there aren't any machines as we'd understand them. Probably the most sophisticated device in general use is the spear thrower. The biggest organized human events are mass hunts and the occasional clan gathering. They're limited in size and duration because you can't feed that many people by hunting and gathering in one place for long, and it's hard to store much food: it goes bad, or it's eaten by vermin.

Fast-forward a few thousand years and not all that much has changed. Advances in agriculture and organization make some difference: more people can live closer together, thanks to the higher efficiency of farming over hunting and gathering (though because farming is hard work, those people are usually less well nourished and harder working than the hunters and gatherers). There's still not much in the way of sophisticated machinery. There are tools a caveman wouldn't recognize, but nothing he couldn't figure out in a few minutes.

Things stay pretty much this way, in fact, until the Industrial Revolution. Agriculture, written language, and a growing facility for procuring and using metals allow big empires to organize large numbers of people, but not very efficiently. Doing things on a large scale is usually less efficient than cottage industry, because coordinating all those people is so much trouble. You can build big things, like the Pyramids or the Great Wall of China, but at enormous cost, and only by making people choose between hauling bricks or being killed. For most of human history, this was the norm.


But the Industrial Revolution changed things. Improvements in organization, communications, and machinery meant that it was often much more efficient to do things on a large scale than on a small one, as Adam Smith observed in his famous description of a pin factory:

A workman not educated to this business ... could scarce, perhaps, with his utmost industry, make one pin in one day, and certainly could not make twenty. But in the way in which this business is now carried on not only the whole work is a peculiar trade, but it is divided into a number of branches, of which the greater part are likewise peculiar trades. One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on is a peculiar business, to whiten the pins is another; it is even a trade by itself to put them into the paper.... Those ten persons, therefore, could make among them upwards of forty-eight thousand pins in a day.... But if they had all wrought separately and independently, and without any of them having been educated to this peculiar business, they certainly could not each of them have made twenty, perhaps not one pin in a day; that is, certainly, not the two hundred and fortieth, perhaps not the four thousand eight hundredth part of what they are at present capable of performing, in consequence of a proper division and combination of their different operations. [1]

Division of labor allowed large groups to be organized in ways that were actually more efficient than smaller groups or collections of individuals acting independently. Big machinery allowed big jobs to be done, but because the machinery itself was big it could only do big jobs efficiently. When the smallest efficient steam engine is big enough to power a whole factory, it doesn't make sense to use it for anything less: the cost is the same, but the return is smaller. Thus the "minimum efficient scale" turns out to be pretty big. And lots of capital and lots of time and energy are required to fuel these big operations.

The line between work and play is a lot sharper in the Industrial Age too. Industrialists like Henry Ford didn't think much of levity:

In 1940, John Gallo was sacked because he was "caught in the act of smiling," after having committed an earlier breach of "laughing with the other fellows," and "slowing down the line maybe half a minute." This tight managerial discipline reflected the overall philosophy of Henry Ford, who stated that "When we are at work we ought to be at work. When we are at play we ought to be at play. There is no use trying to mix the two." [2]

Most of the developments of the nineteenth and twentieth centuries followed this pattern. You can't run a railroad as a family business. The same is true for steel mills (the Chinese Communists tried backyard steel-making, disastrously, with their "little steel" program, but learned better) and, after the very earliest days of the automobile industry, auto factories. Other than a few shops serving NASCAR and very rich car collectors, people don't build cars one at a time any more.

Big organizations doing big things: it's the story of the nineteenth and twentieth centuries. In fact, it was so much the theme of those centuries that it's easy to forget what a departure this was from the rest of human history. But it was a huge departure, brought about by the confluence of some unusual technological and social developments.

And it was a mixed bag. On the one hand, it made people in industrialized countries a lot richer, healthier, and longer-lived. Really, a lot. In his book, The Escape from Hunger and Premature Death, 1700-2100, [3] historian Robert Fogel notes that the improvement in living conditions for the working classes in industrial countries during the Industrial Revolution is without any parallel in human history. Life expectancies got much longer (from thirty-two in 1725 to seventy-six in 1990 in the UK); [4] people got taller, were sick less often, and enjoyed much better nutrition. The poor of today are much better off than the aristocrats of the pre-Industrial era.


On the other hand, industrialization created a lot of social strain as traditional ways of living were disrupted by new ways of doing business. William Blake's "Dark Satanic mills" weren't as bad as they're remembered today -- if they had been, people wouldn't have flocked to them. Or maybe it's fairer to say that, bad as they were, they were still better than life as a subsistence farmer. But this new industrial world was very different from life on the farm.

Parents and children were separated. Husbands and wives were separated. "Work" became something separate from the rest of life, something fast-paced and foreign. An old-style blacksmith made a plowshare or a sword from beginning to end. A worker in Adam Smith's pin factory, or Henry Ford's automobile factory, performed a single repetitive task with no real connection, emotional or intellectual, to the overall product. Nor -- unlike those old-time craftsmen -- did factory workers have much of a connection to the economics of the business. Although factory workers did much better economically than peasant farmers had done, their share of the proceeds was trivial compared to that of the people who financed and ran these large capital-intensive operations -- people who became known as "capitalists."

This divide between workers and financiers led to talk about worker "alienation" and the perceived problematic separation of labor from ownership of the means of production. This was the foundation of Marxism and of efforts -- universally disastrous -- to replace capitalists with government-controlled capital in communist countries. Government replacements for free-market capitalists were, if anything, more rapacious. They were also much worse at actually producing wealth. Much of the twentieth century was spent in making this clear in various unfortunate and lethal ways.

The large-scale operations hit their zenith in the mid-twentieth century, with American business revolving around huge entities like General Motors and IBM. Economists like John Kenneth Galbraith [5] began arguing that big corporations were protected from failure by their size, and that the kind of massive organization and information-processing available to these huge enterprises meant that smaller businesses couldn't possibly compete. Bigger was better, and the power resulting from the managerial class "techno-structure" that ran these big corporations was more important than crude things like profits. Or so the theory went.

This turned out not to be the case. Even as Galbraith's book, The New Industrial State, was appearing in 1967, the seeds of change were taking root. Two years earlier, in the thirty-fifth anniversary issue of Electronics magazine, Gordon Moore had first proposed "Moore's Law" -- essentially saying that computing power was doubling every two years and would continue to do so for the foreseeable future. Giant corporations weren't nimble enough to keep up such a pace.


It was a while before the impact of this trend on Galbraith's formulation became obvious, but the growth of cheap computing power has already undercut the importance of big organizations in many areas. That cheap computing power is now being coupled with cheap manufacturing -- including, increasingly, what Neil Gershenfeld calls "personal fabrication," in his book Fab: The Coming Revolution on Your Desktop -- From Personal Computers to Personal Fabrication. [6] But even without the kinds of progress that Gershenfeld describes, manufacturing, including custom manufacturing, has become cheap and versatile enough to neutralize many of the advantages that large organizations once held.

For activities that, ultimately, are about processing information, the computer revolution itself has drastically reduced the minimum efficient scale. A laptop, a cheap video camera, and the free iMovie or Windows Movie Maker software (plus an Internet connection) will let one person do things that the Big Three television networks could only dream of in Galbraith's day, and at a tiny fraction of the cost. The same laptop with a soundcard, a couple of microphones, and software like Acid, Cubase, or Audition can replace an expensive recording studio. Change the software to Lotus or Excel and it can replace an office full of Galbraith-era accountants with calculators, pencils, and paper, and access -- filtered through a priesthood of programmers and machine operators -- to big 1960s mainframe computers.

This observation is commonplace now, of course, but its implications for Galbraith-era economics have gotten somewhat less attention. It's not just that fewer people can do the same work, it's that they don't need a big company to provide the infrastructure to do the work, and, in fact, they may be far more efficient without the big company and all the inefficiencies and stumbling blocks that its bureaucracy and techno-structure tend to produce.

Those inefficiencies were present in Galbraith's day too, of course. People were making jokes about office politics and bureaucratic idiocies long before Dilbert. But in the old days, you had to put up with those problems because you needed the big organization to do the job. Now, increasingly, you don't. Goliath's strength compensated for his clumsiness. But now the Davids can muscle up without all of the unnecessary bulk.

So why be a Goliath? As technology moves toward smaller, faster, and cheaper approaches to many jobs, we're likely to see an army of Davids taking the place of those slow, shuffling Goliaths. This won't be the end of big enterprises, or big bureaucracies (especially, alas, the latter), but it will represent a dramatic reversal of recent history, toward more cottage industry, more small enterprises and ventures, and more empowerment for individuals willing to take advantage of the tools that become available. We're likely to see a movement from the impersonal, imposed means to an end to a more individualized, grassroots way of doing things. In fact, we're already starting to see that, as many people -- laid off or voluntarily departed from big organizations -- start small businesses. Working from home, their daily lives look more like what we saw in the pre-Industrial Revolution era than like the classic Man in the Gray Flannel Suit lifestyle.


One of the most significant consequences of this shift is that the empowerment of individuals may lead to an interesting twist on Karl Marx's goal: workers control the means of production, all right, but it's a far cry from communism. Marx's view was tied to an outdated technological paradigm, but his desired outcome, a world in which "capital" is in the hands of the masses, not just the few, may ironically come about through the technological capitalism that Marx's heirs (though not Marx himself, really) despised.

Technologies that are still on the horizon, like molecular nanotechnology (which enthusiasts predict will lead to machines that can make anything out of "sunlight and dirt") and biotechnology, may bring this trend to complete fruition, but everyday technologies are already moving us a long way in that direction.

The worker's paradise may turn out to be a capitalist creation after all.

In the coming chapters, I'll look at the way this change is playing out in the worlds of business, media, the arts, and even national security. I'll also look at the downside of empowering individuals: if amateur musicians or bloggers are empowered by technology, so in a different way are terrorists.

Overall, I consider the trend to be a positive one. Whether you agree with that assessment or not, the existence of this empowerment is undeniable and irreversible. Love it or hate it, it's worth close consideration.

As William Gibson has remarked, "The future has already arrived -- it's just not evenly distributed." The pockets of the future that we'll be surveying are not only interesting in themselves, but provide a look at how a lot more of the world is likely to operate before long.


Awhile back, blogger Jeff Jarvis noted a press release saying that eBay's sellers were threatening to overtake Wal-Mart's employment numbers:

eBay is fast becoming one of the largest employers in America. Of course, it hardly employs anyone, but it enables a lot of people to employ themselves and run their own businesses: 724,000 people are using it as their full- or part-time employment, up 68 percent from a year ago; another 1.5 million use it to supplement their income. Wal-Mart is America's largest employer with 1.1 million workers. Sure, the eBay-self-employed don't have Wal-Mart's crappy benefits and uniforms (if eBay were really smart, they'd institute group health insurance!) but all those folks are their own bosses. As industry gets bigger and bigger, small becomes more and more of an economic force. [1]

Yes, this is something of an apples-and-oranges comparison, but not entirely. And it captures an important point: lots of people don't like their jobs, their bosses, or their offices -- just read any selection of Dilbert comic strips.

What's more, a lot of people responded to the 2000 recession by starting their own businesses. For some it was a case of necessity -- "If I can't get a job, I'll make one!" -- for others it was a case of being given a push toward something they wanted to do anyway. In fact, quite a few formerly unemployed people are now reporting that they're self-employed. Though an economist quoted by the New York Times discounts this phenomenon as "involuntary entrepreneurship," [2] it seems likely that -- voluntary or otherwise -- we'll see a lot more of this sort of thing.

As Slate's Mickey Kaus notes: "If we're entering a new economic era -- one in which traditional cyclical employers won't start rehiring ... isn't it likely, even, that workers will adjust by pursuing entrepreneurial opportunities? And if entrepreneurship is real, what does calling it 'involuntary' mean? I might prefer to have a full-fledged 'job' at Microsoft, complete with stock options, health insurance, etc. Instead, I'm a freelance contractor. Calling my entrepreneurship 'involuntary' might be accurate, but it doesn't mean I'm not working and feeding myself. In the 'newer' economy, you'd expect such self-employment to increase, no?" [3]


For whatever reason, many people have decided to join the ranks of the entrepreneurial classes, and technology has made it a lot easier. What's more, a lot of people really want to live that way. If they didn't, I wouldn't see and hear so many advertisements offering people ways to work at home. Sure, the ads are often scams -- but the demand they're responding to is quite genuine.

Before the Industrial Revolution, artisans worked in or alongside their homes, often with children observing and even helping. After the Industrial Revolution, workers were segregated in factories, where specialized facilities took advantage of new technologies and of the economies of scope and scale that those technologies made available. Blacksmiths could make steel or work iron in small quantities, but foundries could do it better, and cheaper.

The results of this shift reverberated through every level of society. Of course, with the workers off at factories learning the kind of skills -- like punctuality and the ability to follow orders -- that factories required, something had to be done with the kids. This led to two major changes: women often specialized in childrearing to a much greater extent than previously, when childrearing was just part of the household work; and children were segregated into massive "educational factories" of their own: public schools organized, quite explicitly, to mimic factories and assembly lines, with students envisioned as the products. (What's more, the student-products were designed to be good factory employees themselves.)

And that was mostly a good thing. The techniques of industrialization took precedence because they worked better and faster than the methods they replaced. And that made everyone richer and, overall, freer. The social transformations -- in families, in workplaces, and in neighborhoods -- that came on the heels of these changes, on the other hand, were adopted not because they worked better than what they replaced but because they were necessary to survive in and accommodate this new work environment.

Now, it may be that things are starting to change. I was struck by this passage from the writer John Scalzi's blog, describing the impact of Wi-Fi on his life, and how it has freed him from depending on his home office:

At the moment, I'm writing this in [my daughter] Athena's room, on the floor, the computer propped up on my lap; Athena is behind me on her bed making up a Powerpuff adventure. Three weeks ago I would have to be in my [home] office to type this and Athena would be coming in about every six seconds to ask me something or to ask me to do something or whatever, which means I would actually have a difficult time getting work done when she was around; now she's happy to let me work because I have proximity to her. She still asks me questions and such, but once I've answered she's off on her own thing.

Interestingly, this also works with Krissy [his wife]: she's more content to let me do work if I'm in line of sight. There's a real psychological difference between being in the office all the time, away from the family while I'm doing work, and being in the room, doing work while the family is doing stuff around me. It's useful for me (especially when I'm on deadline, like I am right now), and it's better for the family. [4]

I've noticed much of the same thing in my work. I work at home more often now, thanks to the combination of a laptop computer and wireless Internet. I work all over the house, often sitting in a chair while my daughter plays with dolls or does homework. She spends a lot more time around me than I spent with my dad, and this is one reason why.

It's a mixed bag, of course. You can look at it as getting to spend time with your family while you take care of work, or you can look at it as having to work when you're with your family, and no doubt both perspectives are valid from time to time. But it's certainly better for many kids than the frequent absences required by the much less flexible office job.

I'm not alone in this. Many people are doing the same thing as technology makes it easier to do many kinds of jobs at home. How far we'll move in the direction of what Dan Pink calls a "Free Agent Nation" [5] isn't clear: obviously, some jobs are more amenable to the cottage-industry approach than others. Our neighbors tried running a coffee service from home but met with some neighborly resistance when coffee-bearing semitrailers began backing down the street at all hours. Operating a car-repair business or a blast-furnace out of your home might also pose challenges.

But many jobs will move back home, at least in part. And if you believe, as Virginia Postrel suggests, [6] that more jobs in coming years will have an aesthetic component (which is the sort of work that lends itself to a cottage-industry approach), then that trend may accelerate even more. New advances in computer-aided design and manufacturing, along with things like nanotechnology further down the line, may help the trend as well.

How will this change society at large? Schools, of course, will have to adjust to train kids for different career options. But this will just be part of it. The new freedom and flexibility will also change the mix of political issues somewhat: self-employed people tend to hate red tape and taxes (pundits have been predicting a "1099 revolt" for a while, as the percentage of self-employed people grows), but on the other hand, the difficulty of getting things like health insurance when you're not affiliated with a large company (as Jeff Jarvis noted) might make them more amenable to some proposals from the Democrats.


We'll save that speculation for another time, though, because I want to look at some social changes that may come with increasing self-employment and home-based work. The Industrial Revolution, after all, remade our society -- and the boom in white-collar jobs after World War II did it again. Now a new revolution is dawning: How will it change us for good or ill? Here are some thoughts:

Crime: Crime in the suburbs increased once the population of stay-at-home moms was diminished. Neighborhoods had fewer sets of adult eyes around, teenagers got less supervision, and two-career couples were more distracted. Will that change? Likely. "Latchkey" kids are increasingly coming home to a parent who works at home, or whose schedule is irregular enough that his/her absence can't be taken for granted. And irregular schedules mean that thieves can't assume that neighborhoods will be deserted during the day. That's certainly true in my neighborhood, where quite a few of the people are professionals who set their own calendars, and who can often be found mowing the lawn, or lounging by the pool, in the middle of a weekday because they'll be working at night or on the weekend or whenever their schedule best fits.

Family: One of the standard negative depictions from the Gray Flannel Suit era featured a disconnect between the world of work and the world of family. Fathers trudged off en masse to downtown office buildings where they performed inscrutable tasks, from which they returned exhausted and in need of martinis. Kids had little idea what their fathers did; fathers knew little about what their kids did. Husbands and wives moved in different worlds.

The entry of women into the workforce in large numbers has helped this a little, I suppose, but not a lot, especially where the kids are concerned. But kids who get to watch their parents work up close -- the way that kids did in the pre-Industrial Revolution, cottage industry days -- are likely to have a much greater appreciation of how the world of work operates. Perhaps, like the kids in the pre-Industrial Revolution days, they'll mature more quickly as a result, though here I may be overly optimistic. At the very least, however, they'll see work behavior modeled in their presence. Instead of "take your daughter (or son) to work" day, it'll be "take work to your kids" every day. Spouses also tend to know a lot more about the work of their self-employed beloveds, for better or worse. I'm not enough of a sociologist -- or a psychic -- to analyze all the changes that may result from this phenomenon, but I feel pretty confident that many of these significant changes will be for the better.

Nobody was that thrilled with the Gray Flannel Suit era.

Economy: If more people are free agents, working at home or out-and-about rather than in traditional offices, then businesses that provide them with useful services and amenities will flourish. We're already seeing some of that, with businesses featuring amenities like free wireless Internet connections in order to attract "gypsy workers" who aren't chained to offices and who like to combine work with pleasure. (I often write at one local establishment or another offering free Wi-Fi along with other lures, and I've noticed that I'm not the only one.)

Obviously, other businesses catering to the self-employed crowd -- from Kinko's to Office Depot -- are likely to do well too. On a macro level, self-employment will make economic statistics more difficult to decode: instead of the binary distinction between "employed" and "unemployed," we'll have the fuzzier distinction between "good year" and "not-so-good year" that small businesses tend to experience. As the reports from Jeff Jarvis and Mickey Kaus quoted above indicate, this will make it harder to figure out what's going on in terms of employment.

Traffic: Proponents of light rail and other sorts of mass transit tend to portray these systems as the wave of the future. But the "commuter-rail" model assumes the presence of, well, commuters: traditional gray-flannel-suit types who head downtown in flocks, spend a day at the office, and then return home. The driving pattern for work-at-home types is different: lots of quick, parcel-laden errands to different destinations (like Office Depot or Kinko's). It's much harder to design a commuter-rail system that works for people like that. As Ralph Kinney Bennett notes, the automobile's flexibility and independence are unmatched by other forms of transportation. [7]

Politics: This topic deserves a chapter of its own, and I'll come back to it later. But here's one note: people who are self-employed are far more aware that there's no such thing as a free lunch and far more likely to look at the bottom line. As more of the electorate becomes self-employed, this is likely to produce an overall attitudinal shift in politics, over and above any changes in specific policies. Both state and local governments -- now basically organized along a Henry Ford sort of model -- might want to take a lesson from eBay and Wal-Mart and look for ways in which they can help individuals do their own thing more effectively.

Likewise, political parties, and other political organizations designed around old-fashioned industrial approaches to politics, are unlikely to flourish in a new world of fluid coalitions and issue-oriented constituencies. They, too, may want to look more like eBay, and less like Ford, if they are interested in holding on to their members and influence.


Will people miss things about the old-fashioned employment market? Absolutely. Though "job security" under the old system was always a lot less than it appeared (ask any steelworker or airline pilot), the constant need to hustle up new business that successful self-employment requires is a very different way of life. And though big companies are subject to Dilbert-style inefficiencies and stupidities, they take advantage of division of labor in a way that the self-employed can't. On the other hand, most people who are self-employed, in my experience, tend to like it. Most people who work for big organizations don't. So perhaps, overall, job satisfaction will be higher. I hope so. Because, for good or for ill, this is the trend. And I think that it's here to stay awhile.

Jarvis's initial observation about big and small raises some interesting points of its own. It turns out that eBay does make health insurance available to its "Power Sellers" -- basically, people who sell over $1,000 a month for three months and get good customer reviews -- on terms that aren't bad. [8] (Wal-Mart's benefits also aren't as bad as Jarvis makes them sound. [9]) It's not the best deal in the world, but it's better than many full-time employers offer, and -- unlike, say, auto workers -- eBay Power Sellers don't have to worry about being laid off or fired because they've offended a boss. has similar online programs for independent sellers via its zShops affiliates (which let individuals and small companies sell through its website) and its Amazon Associates program, which pays people referral fees for sales by customers they refer to's website. Their PR people were pretty unforthcoming when I asked them for information, but they did tell me that there are hundreds of thousands of people in both programs. No health insurance yet, but that could change.

This really isn't a question of big versus small; the key is to have both working together. It's easier to be small because outfits like eBay are big: eBay's buying power lets it make group insurance policies available to its sellers on terms they'd be hardpressed to equal on their own. And by aggregating lots of minor sellers into one big marketplace, eBay makes it much easier for individuals to make a living buying and selling things via the Internet. Likewise, other big operations like Wal-Mart, Sam's, Office Depot, Staples, and Costco -- which offer low prices, big selections, and support to small businesses -- do the same kind of thing. By being big, they make it easier for other people to be small.

I think that there's a big future in this cooperation between the two. Many people like the idea of being self-employed, especially as technology makes it so much easier. But while you may not want to work for Dilbert's pointy-haired boss, you probably would want Dilbert's health plan. In a way, sites like eBay and Amazon are replacing or "disintermediating" the pointy-haired boss, and all other organizational layers between the people who do the work, and the actual customers. Similarly, music sites like (which I'll discuss in more detail in Chapter Four) are disintermediating the record companies (and producers and A&R people) who sit between the musicians and their audiences.

But they're also re-intermediating by putting themselves in the role formerly occupied by the companies and management. To the extent that they're doing things that traditional companies used to do -- dickering with health insurance companies and providing a trusted reputation that makes customers feel better about dealing with strangers they'll never meet -- they're filling that niche. But they're doing so in a very different way, with very different implications for the economy, and for employment.

The secret to success in big business and politics in the twenty-first century, I think, will involve figuring out a way to capitalize on the phenomenon of lots of people doing what they want to do, rather than -- as in previous centuries -- figuring out ways to make lots of people do what you want them to. The eBay and GarageBand examples are just the beginning. I suspect that more enterprising folks will figure out ways to make money along the same lines.


Another way that small is the new big, of course, doesn't have much to do with the Internet. As people have more money and more stuff, they often become more interested in buying services: purchases that buy time, like a cleaning service, or a certain kind of experience, like a spa retreat. Sometimes those services are substitutes for goods that people once bought, and sometimes -- and I think this will be the wave of the future -- the services are bound up with the goods themselves. And sometimes when we buy the goods, it's really the service we're after.

I'm a big fan of Virginia Postrel's work, not least because it seems to resonate with things that happen in my everyday life. Not long ago, her New York Times column eerily predicted a weekend shopping expedition of mine with my daughter.

I've bought my ten-year-old daughter countless shoes at big discount places: Target, Kohl's, Shoe Warehouse. When she was little, that was fine. Now that she's older, she's become a bit harder to please. Finding shoes that she likes, shoes that fit well (it's harder to keep her size straight now), is not so easy. So one Saturday we went to Coffin's Shoes, a venerable Knoxville outfit that's been selling shoes the old-fashioned way since the 1920s. A friendly salesman, who had obviously been doing his job for quite a while, measured her feet, listened to her talk about what she liked, had her try on a couple of shoes made on different-shaped "lasts" to get an idea of what she found comfortable, and then disappeared into the back, reemerging with a tower of shoes for her scrutiny.

After about half an hour of individual attention, we departed with two new pairs of shoes that she pronounced "the best shoes ever." And, she reported, they were comfortable. Of course, they cost more than it would have cost to buy shoes -- even the same shoes, if that had been possible -- at Target. But we wouldn't have gotten the service.

Now comes Postrel's column in the New York Times, where she notes that Americans are consuming more services and relatively fewer goods. "Listen to the economic debate carefully, and you might get the idea that the problem with the economy is that Americans just are not materialistic enough," she writes. It's a counterintuitive notion. So how does that square with reality? Pretty simple, really. "We spend too much of our income on restaurant meals, entertainment, travel and health care and not enough on refrigerators, ball bearings, blue jeans and cars.... As incomes go up, Americans spend a greater proportion on intangibles and relatively less on goods. One result is more new jobs in hotels, health clubs and hospitals, and fewer in factories." According to Postrel, between 1959 and 2000 the percentage of income that Americans spent on services jumped from about 40 percent to 58 percent. And, she says, "That figure understates the trend, because in many cases goods and services come bundled together." [10]

In fact, that's what I was really buying at the shoe store: goods and services bundled together. At places like Target, they're unbundled -- you get goods, but not much in the way of service. (You get even less service at Wal-Mart or Costco.) I bought the shoes at an old-fashioned shoe store. In the process I paid extra for the service, and I got my money's worth.

But that's only part of the story. There's more to this than simply choosing to spend money for a massage instead of a TV. As consumers become more interested in the total buying experience, the appeal of Big Box stores -- whose approach consists of giving you much less service in exchange for somewhat lower prices -- may decline; in turn, the appeal of old-fashioned specialty stores, where the salespeople know their products and their customers, may come back.

If people want a "dining experience" more than they want a cheap meal -- and, as Postrel notes, nowadays they often do -- then they're likely to want a shopping experience, not just cheap shoes. And they'll be willing to pay to get it. This won't mean the end of Big Box discounters any more than the desire for dining experiences has meant the end of fast food. (Some Big Boxes, as I've mentioned above, actually facilitate small businesses). But it may mean the reappearance of a certain kind of shopping -- and certain kinds of jobs -- that some people thought the Big Boxes would wipe out forever. And because people can get the basics of life cheaply at places like Wal-Mart, they'll actually have more money available to spend on that sort of shopping where non-basics are concerned.

Services can also replace goods. We tend to treat manufacturing as authentic and services as, somehow, bogus -- not real economic activity. That's a traditional view of service industries harking back to Adam Smith. Manufacturing produces something tangible. The results of services are much less obvious.

Postrel, in fact, almost seems to accept this critique in another column: "By missing so many new sources of productivity, the undercounts distort our already distorted view of economic value -- the view that treats traditional manufacturing and management jobs as more legitimate, even more real, than craft professions or personal-service businesses. Still, more and more people are recognizing that true value can come as much from intangible pleasures as it can from tangible goods." [11]

But services can produce more than just "intangible pleasures." They can displace tangible goods. In fact, even "personal services" like massage therapy can displace goods, as I can attest from personal experience.

When practicing law in Washington back in the 1980s, I was one of the early laptop computer users, and I paid the price. I developed all the usual computer problems: numbness and shooting pains in my wrists and hands, backaches, neckaches, and headaches. My health plan then was the George Washington University HMO. So I got great care at a fancy teaching hospital that, since it could use me as a guinea pig to train medical residents, had no interest in cutting corners on treatments that did me no good at all. I was examined by neurologists, immunologists, occupational medicine specialists, and orthopedists. I had nerve conduction studies and electromyograms. I was given powerful NSAIDs that upset my stomach but provided little relief. I was tested for lupus, myasthenia gravis, and Lou Gehrig's disease.

Then I went to a massage therapist, who dug her thumb into my back just inside a shoulder blade and asked, "Does this trigger your symptoms?" It did. She prescribed some stretches and exercises, and I got much better.

A pill that gave me an equivalent amount of relief would be considered a "product," and the worker who made it would occupy a "manufacturing job." But the pill would have side effects, and it would come out of a factory that consumed resources and energy, and produced pollution and waste, in a way that a massage therapist doesn't.

So the massage therapist is, in a sense, a replacement for that manufacturing job. What's more, the reason there are more massage therapists now, in part, is that more people can afford them. And more people can afford them because increasing productivity makes manufactured stuff -- computers, clothing, food -- cheaper. So when companies shift to automation or outsourcing to lower their costs, it in fact does help to produce new jobs at home.

To pick another example, consider the manufacture of cheap plastic dolls -- whether Barbies, Bratz, or, God forbid, Liam Flavas (a manpurse-carrying metrosexual consort to the Flava line of urban dolls). My daughter used to spend most of her allowance money on that sort of thing. But more recently she's been spending her money at places like Club Libby Lu, a Saks franchise where she gets "starlet makeovers" and the like. These cost about as much as a doll but, to my delight, they don't add to the mountain of trash at my house that has grown big enough to worry my garbagemen. Isn't the American economy actually better off when cheap plastic dolls made in China are replaced by services performed at home? Heck, I think it's better even than my daughter's money going toward cheap plastic dolls made in America: there's no environmental damage (except for the tenacious glitter-powder) and no addition to my trash pile. And makeovers are harder to move offshore.

And where, to belabor a point, do we get the money to pay for these services? In part, money is available because technology makes manufactured goods and food cheaper. And as society becomes richer, time and energy are spent doing things rather than in production. It's probably not a coincidence that many services (massage therapy, for example) actually work better on a smaller scale. Somehow I don't think a McMassage or a Wal-Mart Massage Center would do as well. Still, your masseuse may hold prices down by buying equipment at Wal-Mart, and you may be able to afford a massage because you were able to buy a six-pound bag of pasta for $2.29 at Sam's Club. As the big guys get better at being big, it's actually easier for the little guys to stay small. That's a kind of synergy we're likely to see more of.


I've noticed a gradual change in public surroundings over the past few years. Unlike the hard, unappealing settings of traditional retail space (ground rule: "get 'em in, get their money, get 'em out"), more and more stores are being designed to encourage customers to linger.

Some of these transformations are obvious -- the cozy coffee bars and cafes featured by many bookstores, for instance. But the phenomenon has spread to less obvious locales. In the mall near my house, for example, an Abercrombie spin-off called Hollister & Co. features comfortable leather chairs complete with end tables and stacks of magazines. The first time I was there I joked to a salesgirl that I might come back with my laptop and camp out. "People do," she responded. And when I went back a couple of weeks later, the circle of armchairs nearest the cash registers was completely occupied by teenagers with cell phones and PDAs. A conversation with a couple of staffers confirmed that the store was intentionally designed to serve as a "hangout."

And I think this shift in design may be the key to understanding how personal technology has changed us. In the old days, retailers knew that most people squeezed shopping in between the office and home. The goal was to sell as much as possible to people during the small amount of time available. Hence the keep-'em-moving philosophy. But people live differently now. Lots of people work independently, or part-time, or as telecommuters. The lifestyle is more fluid, in part because technologies like cell phones, laptops, and PDAs allow people to work no matter where they are while also staying connected to family, friends, and colleagues. I see a lot of folks with that kind of personal tech hanging out wherever there's a pleasant setting, checking email, returning calls, or writing. It's work that doesn't quite feel like work.


This fluidity gives retailers and other businesses a different kind of opportunity. Retailers have always tried to sell the idea of a certain lifestyle along with their product: a sweater can become a symbol of social status. But if you become somebody's hangout, you don't just sell the suggestion of a kind of lifestyle, you're selling a particular way of life. If price and selection are the main basis for competition, people can always buy on the Internet; but everyone -- especially teenagers -- will still want a place to go. By becoming a place to hang out, a store can sell both the experience and the goods.

Does it work? Well, I'm writing this on a laptop in a Borders right now, comfortably ensconced upon a leather couch and waiting for the line to thin so I can order a latte. I do a lot of writing here, especially during the summers or on breaks when the university is closed. (And they sell me more books and CDs as a result.) A few years ago, in the pre-laptop, pre-Wi-Fi era, it would have been much more cumbersome and inconvenient to work and hang out simultaneously.

Examples of this trend are ubiquitous. A new public library in my area is breaking the old library taboo against food and installing a luxurious coffee bar of the sort normally found only in chain book superstores. Some malls provide a place for tired moms to chat on their cell phones while their kids romp in elaborate play areas. Health food stores provide welcoming spaces complete with live music and kitchen access. Even many churches in my area feature coffee bars with Wi-Fi.

As the trend has continued, we've started to see all sorts of amenities added: not just comfy chairs and beverage service, but wireless broadband Internet access, fireplaces, books and magazines (already begun at Hollister & Co.), and other furnishings and services designed to keep customers around, comfy, and receptive. Businesses reap rewards in the form of impulse buys and customer loyalty. But everyone enjoys the benefits of an abundance of safe, comfortable places to hang out, something that advocates of "community" were calling for just a few years ago.

People like to go out, and providing inexpensive hangouts may draw more business in a recession than when people are feeling flush. And it may be cheaper too, even when times are good. After all, you can buy a lot of comfy chairs for the price of a single Super Bowl ad slot.

Certainly the prevalence of comfy chairs and hangout-marketing bespeaks an attempt to meet an unfulfilled need for safe and comfortable public spaces. My Borders hangout is a good example -- and it also illustrates how capitalism, combined with personal technology, can promote community.

I have an office with a nice computer, and I have a study at home with a nicer computer. But I often pack up my laptop, or a book that I'm reading, or student papers to grade, and relocate to this third place: somewhere more congenial than the office, less isolated than home.

Others must feel the same way because when I'm tapping away at my laptop, I find myself surrounded by people of all sorts. On a typical day, the place is hopping: tables are filled by students, alternately studying and flirting; a parent drilling a homeschooled child on Babylonian history; one or two road-warrior salespeople catching up on scheduling and messages; a gaggle of Bible-studiers arguing about Job; and a leather-clad cyberpunk youth sitting with his more conventional mother. By now, I know all the regulars by sight, and many by name. We keep up on each other's lives in a casual sort of way.

This third place, of course, is the "Third Place" that sociologist Ray Oldenburg called essential to civilization in his 1989 book The Great Good Place. [1] The third place, Oldenburg observes, must possess the following characteristics: it has to be free or inexpensive, offer food and drink, be accessible, draw enough people to feel social, and foster easy conversation. Oldenburg lamented that such places were disappearing.

Back in 1989, they were. Today, they're not -- and you can thank the much-maligned chain book superstores for this. Certainly when I moved to my upscale Knoxville suburb some years ago there weren't many such places. Nor had there been many in Washington D.C.: the Afterwords Cafe at Kramerbooks was the closest thing, but it didn't really fit the bill. When I lived in New Haven, Connecticut, the famous Atticus Books was like a poor man's Borders: cozy, but no public restrooms. (They've since added them, in the face of competition from the palatial Barnes & Noble-operated Yale co-op down the street.)

Now, within about a mile of each other in my Knoxville suburb, stand three big bookstore/cafe complexes: Borders, Barnes & Noble, and Books-A-Million. All seem to be thriving.

They're doing well because they've identified a need and they're meeting it. You'd think that this would make a lot of people happy -- and, of course, it does, as I can tell just by looking around. But you'd think it would make more than just the customers happy; you'd think that it would please the people who are always worrying about America's need for "community."

In that, however, you would mostly be mistaken. While hostility toward book superstores has receded from its late-1990s peak, it is still very real. Independent bookstores, we are told, are genuine; chain bookstores are all about marketing. Chain bookstores are bad for small presses, bad for communities, and -- as Carol Anne Douglas writes in Off Our Backs -- bad for feminists, whose books apparently can only be bought at "feminist bookstores." [2]

I don't know about the feminists, but small-press sales appear to be up thanks to chain bookstores' larger selection of titles. Communities are surely benefiting from the introduction of pleasant third places where they didn't exist before. And what's more, with the exception of a handful of independents, chain bookstores are better at being third places.

Perhaps this is because independent bookstores traditionally have been run by people who like books. These people generally aren't interested in offering the other amenities that Oldenburg names as important and that superstores provide: coffee shops, big chairs, and live music performances. At many independent bookstores, employees like books better than people and want you to know it -- the bookish version of the music geeks in the book (and movie) High Fidelity. [3] (Small bookstores may not have the money for these amenities, either, though they're not terribly expensive).

The chains, however, aren't in business for personal gratification. They just want to keep customers coming back.

Want coffee? Got it!

Want a triple mocha latte and handmade fresh sandwiches and salads? Got it!

And, interestingly, the extra traffic that these amenities produce means that chain stores typically can afford a better selection of books than the independents, which is why small presses are benefiting right along with latte-lovers.

Well, no surprise there. That's what capitalism is all about. Funny that it's a dirty word to some people. But put technology and capitalism together, and what we often get is an updated version of the good old days; the changes we associated with technology and capitalism -- fast-food-style uniformity, alienation, and lowest-common-denominator treatment -- were actually products of a particular, and transitional, stage in technology. Now that the technology has changed, so have the economics, and so has the response from business. And it goes way beyond Borders.

As a believer in markets, I think that this trend will eventually find an equilibrium point. As an observer of the current direction of technological change, I think that equilibrium point will be a lot closer to where things were in the eighteenth century than to where they were just a few years ago. And this will be on account of many forces both pushing and pulling the change along. Let's look at these "pushes" and "pulls."


The "push" comes from the office environment. You have almost certainly read Dilbert, and I'm tempted to simply cite the comic strip and say, "Case closed." But there's more to it than that.

Yes, the office environment can be unpleasant, and the commute can be nasty and time-consuming and expensive -- just a few reasons people like to work at home. But working at home has its own problems. It can be hard to maintain the work/non-work boundaries. And who wants to meet with clients in your den?

On the other hand, offices are expensive. I've noticed a lot of small business people in my area giving up their offices and having meetings in public places -- Starbucks, Borders, the public library, and so on. In fact, a real estate agent recently told me that the small-office commercial real estate market is actually suffering as a result of so many people making this kind of move.

The "push" comes from people wanting to get out of offices. But the "pull" comes from the technology that makes it possible, and from businesses' desire to cash in. The existence of personal tech like laptops, PDAs, and cell phones, coupled with Wi-Fi and other technologies that allow Internet access from all over means that you don't need to be at the office nearly as much anymore.

If a home is, in Le Corbusier's words, a "machine for living," then an office is a "machine for working." But nowadays, the machinery is looking a bit obsolescent. The traditional office took shape in the nineteenth century, largely due to new technology. People needed to be close to each other to communicate and make use of services like telegraphs, telephones, and messengers (and later copy and fax machines and elaborate computer equipment). You can pretty much carry all that stuff with you now. And people are doing just that.

Consequently, a market has arisen for places that cater to this more fluid workstyle. Right now we're seeing the early phase of that, with amenities that focus on Wi-Fi and lattes. In time, we're likely to see much more than that. A recent article in Salon by Linda Baker finds that many urban-design types are looking beyond connectivity to interconnectivity. For example, she points to pervasive urban networks that let people access the Web, determine whether their friends are in the area via a tool called FriendFinder, and arrange meetings:

"I can come into downtown Athens [Georgia] with a PDA, send a text message that I'm going to be in Blue Sky Coffee for two hours, then turn it off and put it in my pocket," explains Shamp. "Then when one of my buddies comes into downtown, he can use the WAG zone to find out where his friends are." [4]

Various target groups will get different amenities; business users might like readily available Internet printing, for example, more than friend-finding -- or maybe not. But my guess is that the end result will look more like the eighteenth-century coffee-houses, in which so many of that day conducted their business (Lloyd's of London started in Lloyd's coffee-house), than like the office towers where the twentieth century's men in the gray flannel suits encamped.

In the eighteenth century, the coffee-house was a hotbed of activity: "There," according to British newsweekly The Economist, "for the price of a cup of coffee, you could read the latest pamphlets, catch up on news and gossip, attend scientific lectures, strike business deals, or chat with like-minded people about literature or politics." These coffee-houses even served as offices -- Richard Steele, editor of London's popular periodical, the Tatler, requested that his mail be delivered to his favorite coffee haunt. Londoners would drop in at several coffee-houses to participate in all kinds of conversation. "Regulars could pop in once or twice a day, hear the latest news, and check to see if any post awaited them .... [M]ost people frequented several coffee-houses, the choice of which reflected their range of interests." [5]


I believe this is part of a larger phenomenon. Nineteenth- and twentieth-century technology seemed to favor aggregation, uniformity, and large size. Twenty-first-century technology seems to favor diversity, variety, and small size -- along with a much higher degree of interconnection. From politics to work, from factories to malls, I think there are quite a few revolutions along these lines yet to come, and I think they'll go well beyond comfy chairs.

In fact, they're moving the factories into the malls. Build-A-Bear, a place where I've spent a lot of time, is a good example. My daughter had her birthday recently, and during her party I experienced what I'll call a Virginia Postrel moment. The party was at Build-A-Bear, a place that I thought was sure to go out of business when it first opened. Why put a factory in a mall? Who, I asked, would pay top dollar to assemble their own teddy bear or other stuffed animal when you could buy perfectly good ones off the shelf? Well, that was before I had a daughter, and now I know the answer: lots of little girls!

During the party it was interesting to watch the girls picking out animals with the help of the friendly salespeople. (Note: The phrase "Would you like me to stuff your monkey?" sounds, somehow, inappropriate.) As my wife pointed out, the animal-and-clothing combinations that the girls put together reflected their own personalities and styles.

The girls were very happy, but I couldn't help thinking that quite a few bluenoses would have disapproved. Customized bears (or monkeys!) that you put together yourself? An endless array of bear-pants, bear-glasses, bear-hats, bear-dresses, bear-briefcases, and even bear-roller skates to go with them? Who needs it? Rotten kids, spoiled rotten!

Except that actually they're rather nice girls, who with no prompting spent considerably less than the party budget allowed for, and who cooperated sweetly in picking things out and complimenting each others' choices. So as I was paying the bill (the cashier was an Albanian Kosovar refugee, who seems to have settled in rather well in that most inclusive and most American of institutions: the shopping mall), I had a Postrel moment: I realized why I was so thoroughly wrong about the prospects for Build-A-Bear.

Virginia Postrel has argued in her book, The Substance of Style, [6] that aesthetic values are becoming a major driver -- perhaps the major new driver -- of economic activity. It's easy to scoff at this because aesthetics seem divorced from function: an ugly car gets you where you're going just as quickly and reliably as a pretty one, an ugly coat keeps you just as warm as a handsome one, and an ugly house keeps the rain off just as well as a showplace.

Nonetheless, attractiveness matters. We all know that an ugly spouse can be just as faithful and loving as a gorgeous one -- even, if popular legend is to be believed, more so -- but we nonetheless tend to choose mates whose looks we like. To my daughter and her friends, it's natural to spend a lot of time thinking about what looks good. And, judging by the attention that my nephews pay to the subjects of their interests (automobiles, airplanes, and other vehicles, mostly), looks matter there too.

So does customization. What the folks at Build-A-Bear figured out, and what I missed entirely when I scoffed at their business plan, is that people don't just want things to look good. They want them to look good their way. That's what makes Build-A-Bear work.

Other stores have stuffed animals that are just as attractive, but the buyers don't feel that they are unique. So where will this lead? People talk about "customizing" outfits with accessories, but how long before on-the-spot manufacturing lets people design clothing themselves, or download designs from the Internet and produce truly one-of-a-kind outfits? People are already experimenting, and I suspect that a "Build-An-Outfit" will be coming soon to a mall near you.

I also suspect that it's just the beginning. (Design your own car? Why not?) But I also have another suspicion that verges on certainty: when it happens, people will complain. Just as people complained about the enforced conformity of old-style mass-production, people (often the same people) will complain about the multiplicity of choices offered by new technologies.

But then, complaining is an aesthetic style too, of a sort -- though it's one that, for better and worse, may not fit in as well at malls as it does elsewhere. And as malls develop (beyond comfy chairs and Build-A-Bear) that may become much more significant.

Reportedly, the new trend is toward a different kind of mall, the "lifestyle center," which fits the beyond-comfy-chairs description pretty well. Changes in shopping habits and an increased competitiveness due to the Internet and other local specialized boutiques have motivated retailers in shopping centers to be more imaginative in order to keep bringing in customers. [7] And people are specifically invoking the "third place" point in pitching these facilities, as this account of one such venture makes clear:

His idea for Camano Commons, a 3.3-acre gathering place, is to try to capture that European spirit of places where private commerce and public leisure mix readily, said the project's marketing director, Theresa Metzger.

"In Paris, you have the sidewalk cafe. In England, you have the neighborhood pub," Metzger said....

"Americans are so unfamiliar with third places, so I always like to describe it this way: Remember the TV show, 'Cheers'? They didn't always get along, but when somebody was missing, they got concerned," Ericson said. [8]

I think we'll see more of that. In my local mall, blue-haired Goths with multiple piercings cluster in one area, while Dungeons & Dragons-playing teens stake out their territory in another spot. All the while, senior citizens and families stroll around them. It seems to me that the traditional downtown is being replaced by commercial spaces. And that has its ups, its downs, and its lessons.

The "up" is that Americans are getting the kind of safe, diverse, and communal public space that critics of suburbanization have long called for. Rather than being locked in their tract homes, watching television and not knowing their neighbors, Americans are increasingly spending their time in public spaces surrounded by all sorts of other people.

Another upside is that -- unlike the cumbersome white-elephant "downtown revitalization" projects envisioned by urban planners and funded by massive quantities of taxpayers' money -- these public spaces are market-driven and actually generate tax dollars rather than consume them. And, because it's market-driven, the comfy-chair revolution can turn on a dime to meet consumer needs and interests.


The downside is that the traditional downtown has been replaced by corporate-controlled space. What's wrong with that? Well, in the traditional downtown, things like the First Amendment's guarantee of free speech apply. In malls, they generally don't. (One of my former students has written an interesting law review article on this subject.[9]) But that's where the people are, meaning that First Amendment guarantees of the right to protest downtown are increasingly meaningless when nobody goes downtown. Indeed, here in Knoxville the antiwar protests, such as they were, were held on the sidewalk in front of West Town Mall when the protest organizers realized that a weekend protest downtown would be the proverbial tree falling unheard in the forest. Malls often have such offensive characteristics as omnipresent security cameras coupled with draconian bans on picture taking. It's not like Singapore, exactly, but it's not your old-fashioned downtown square either.

But there's a lesson too. One reason why people go to malls instead of downtown is that they feel safe. Part of this is physical safety, though that's partly an illusion. Mall crime doesn't get reported much -- all those advertisers make it easy to persuade local media to keep it quiet -- but there's lots more of it than you'd think. Makes sense: criminals go where the money is, and a mugger would starve to death in most downtowns.

But more important than the desire for physical safety, I think, is the desire to go un-hassled by unpleasant people. Vagrants (relatively safe from prosecution in light of Supreme Court decisions), panhandlers, and accosters of pedestrians ranging from Bible-thumping street preachers to various political activists are free to express themselves in downtowns, thanks to the expansive First Amendment jurisprudence of the past half-century. But, except in a few states where the state constitution has been interpreted to treat malls as public space, they're barred from these spaces. And, in a curious coincidence, that's where people tend to go. (How do people really feel about this? I've observed that in the movie Airplane!, the audience always cheers when the airport solicitors get beaten up.)

So what's the lesson? Free speech absolutists (and I'm pretty much one myself) may tell people that being hassled by loudmouths is part of democracy. And people may even agree -- but they'll still choose the mall over downtown if the hassle-factor gets very high. What that means, among other things, is that public-sector rules are always subject to private-sector competition. It also suggests that you can enact rules that promote free speech at the cost of people being hassled -- but if you go too far, people will vote with their feet by choosing a controlled environment with fewer hassles.

This sort of market-constrained approach to rights may trouble some people, though it's really just a public-private version of the sort of competition among states that federalists have always supported. Either way, it's a reality worth keeping in mind when planning rules and regulations for public and quasi-public spaces -- especially since we are likely to see the latter increase as a result of the comfy-chair revolution.

The upside, though, is that the traditional lonely orator, trying to get his (it was almost always "his") message across in the public square, isn't so important as a symbol of free speech anymore. The Supreme Court once wrote, "The liberty of the press is the right of the lonely pamphleteer who uses carbon paper or a mimeograph just as much as of the large metropolitan publisher who uses the latest photocomposition methods." [10] But more recently, the Court noted, "Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer." [11]

But, actually, technology has made it possible for individuals to become not merely pamphleteers, but vital sources of news and opinion that rival large metropolitan publishers in audience and influence. Since these independent sources are both less expensive and usually less annoying, perhaps First Amendment doctrine will take the difference into account.

Charles Black once wrote of "the plight of the captive auditor," who is subjected to messages that "he cannot choose but hear." [12] Limits to technology may have required us to overlook the captive auditor's plight in the past in the name of free speech -- causing many people to vote with their feet in favor of controlled private space. But newer technologies may justify a different approach today: the First Amendment often requires the government to pursue the least restrictive means in regulating speech. Perhaps there should be at least an implicit requirement that speakers use the least annoying means of speaking too, or at least abide by limits when choosing the most annoying. This doesn't strike me as a bad thing. While the Internet makes publishing -- and hence a free press -- easier and cheaper, technologies like The Cloud and FriendFinder should make free speech, and public orations, easier and cheaper too, without the need to annoy. They'd better, anyway, because people's willingness to put up with annoyance is limited, while people's choices are, thanks to technology and the market, growing all the time.

What makes this issue difficult is that the tidy division between public and private spaces that we've taken for granted in recent years is breaking down. Traditional public spaces, like town squares, usually lack amenities. Even public restrooms are often hard to find. Private-public spaces like bookstores and coffeehouses have amenities and are open to everyone; but people tend to develop a proprietary interest in the places they frequent most. (No surprise to anyone who has ever heard a Londoner refer to "my pub.") Likewise, as people develop more control over their environments, they tend to have less tolerance for things that threaten that control. Americans tolerate TV commercials but hate popup ads, accept junk mail but despise spam, and, I suspect, will respond even less favorably to interruptions by strangers in public places once they become accustomed to meeting mostly with people they know or have something in common with. The "third place" may be a partial remedy to that, but as with the pub, we're likely to see people who don't fit in get a somewhat chillier reception. Determining the boundaries for acceptable public conduct, especially in private-public places, may prove a challenge in the future.

Working it out won't be easy, but then all revolutions have their difficulties.


The popular media are obsessed with news about how technologies like Napster, Grokster, and BitTorrent are making life hard for musicians. But what they rarely report is how technology affecting everything from the latest equipment to file swapping and podcasting is also empowering ordinary people to create, not simply to copy.

That's certainly been my experience. Back in college I had a band. It wasn't a great band, but it was good enough to get gigs in all the local clubs. We recorded a couple of demo tapes, but to do that we had to rent time in a studio, pay an engineer, and then produce copies on cassettes. If I remember correctly, a demo cost about five hundred bucks, which was real money back in the Reagan years.

The studio had a big Tascam mixing board, which cost thousands of dollars, and a big Tascam reel-to-reel deck that tracks were recorded on. It cost thousands of dollars too. The recording heads had to be cleaned, demagnetized, and aligned regularly. The tapes were very expensive -- a hundred bucks apiece for the good ones.


Things are different today. You can buy a "studio-on-a-shelf" (Tascam's trademark for its compact all-in-one recording devices) with infinitely more capability, and it costs you about a thousand bucks. And where Terry Hill's Camel Studios, the studio that we used, had eight tracks, the do-it-yourself model will have sixteen or twenty-four. I had to look hard to find a do-it-yourself recording device limited only to what Camel Studios offered. The Musician's Friend website does list a Fostex 8-track all-in-one studio for $399, but it still does things that Camel Studios couldn't, like emulate different expensive microphones and guitar amplifiers using built-in computer models. Unlike Camel Studios, you can't get Terry Hill to sit in on guitars or offer unsolicited (and usually good) production advice, but on the other hand, the Fostex doesn't produce endless clouds of cigarette smoke either.

And the Fostex lets you burn CDs and transfer .wav files to a computer so that you can convert them to MP3s and upload them to the Internet. Despite his technical sophistication (Terry Hill was buddies with guitar gadget geniuses like Brian Eno, Robert Fripp, and Alvin Lee), mentioning any of those capabilities would probably have elicited a "Huh?" from Terry.

I've taken advantage of all of these kinds of capabilities myself. With some friends, I have a genuine studio with an eight-track digital tape deck, an impressive mixing console, and racks of effects boxes. I use the studio sometimes, but most of my recording is done at my house, on a computer, using an interface box by Echo Audio; software like Cubase, Acid, and Audition; and various pieces of software that emulate actual instruments -- programs that take the place of the old effect boxes with wires and glowing lights. My ReBirth RB-338, for example, emulates the old Roland TB-303 synthesizer that produces the sounds associated with classic techno -- and it sounds cleaner than the real thing, which was originally designed as a cheesy accompaniment to lounge bands, not a studio instrument. My Native Instruments Pro-52 emulates a Prophet 5, a classic '70s-'80s synthesizer. And Cubase comes with all sorts of virtual instruments, including a surprisingly good electric guitar. Propellerhead's Reason is an entire studio and collection of virtual instruments aimed at producing trance and hiphop; it costs about $400. Many of these software emulations do a surprisingly good job of capturing the sound and feel of the original instruments, often even offering graphic recreations of bakelite control knobs and analog meters on the computer screen.

These things let you make music easily and cheaply -- although you still have to be able to make it sound good. When my wife made a documentary recently, I did the soundtrack entirely on the computer. I licensed a handful of loops (about ten seconds each) from Brian Transeau, a musician and sound designer I like a lot (he did the soundtrack to Monster), assembled them with some stuff I recorded myself, and created a soundtrack in a couple of weeks. Doing it the old-fashioned way would have cost thousands of dollars.

What's more, the new music often sounds better, if it's done right. I've always been a big fan of vintage equipment -- my favorite keyboard is a Roland Juno -- but the fact is that music recorded on computers often sounds cleaner and richer than music recorded on tape in studios. And software comes to the rescue there too -- in my case, in a way that offers some broader lessons.

Polish software engineers are making me very happy. I know, I know: this sounds like some sort of punch line. But it's not, and here's why.

My brother and I have a small record label. It's not a nonprofit, though it might as well be, but we have fun, and we're able to release things that a bigger record company -- one whose shareholders actually cared about making money -- might not touch, from Nebraska tractor-punk to native Ugandan music.

I'm the main sound engineer, and one of my tasks is to "master" everything. That means performing a variety of transformations to the finished mixes before they're turned into CDs: adding compression, adjusting the stereo image, normalizing levels, applying frequency equalization, etc. Mastering is more of an art than a science. When it's done right, everything on the song sounds just like it did before, only more so: "As if somebody cleaned the wax out of your ears" is a standard definition.
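To give a feel for what one of these steps actually does, here is a rough sketch of peak normalization -- my own illustration, not the author's tool chain or any particular product's code. It rescales a finished mix so its loudest sample sits at a chosen level just below full scale; the -0.3 dB target is a typical but arbitrary choice.

```python
import numpy as np

def normalize_peak(mix, target_db=-0.3):
    """Scale `mix` so its loudest sample sits at `target_db` dBFS
    (0 dBFS = digital full scale). Relative levels are unchanged."""
    peak = np.max(np.abs(mix))
    target = 10 ** (target_db / 20)   # convert dB to a linear amplitude
    return mix * (target / peak)

# A toy four-sample "mix" whose peak is at 0.4 of full scale:
mix = np.array([0.1, -0.4, 0.25, -0.05])
mastered = normalize_peak(mix)
```

Compression, EQ, and stereo adjustment are more elaborate transformations, but they share this shape: a function applied to the finished mix before it goes to disc.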

Nowadays there's even more to mastering. People object to what they judge to be the cold and harsh sound of digital recording. But the coldness isn't really caused by unpleasant digital distortion; rather, it's just the opposite: analog tape recording actually distorts the sound in ways that people like. Recording to tape adds even-numbered harmonics, smoothly rolls off the extreme highs, and because it doesn't respond linearly to increased volume it produces what's called "tape compression." All these distortions result in a feeling of warmth, fullness, and general ear-pleasing goodness. The harshness that people blame on digital technology actually comes from the absence of pleasant artifacts, not from any new quality that the digital recording process injects.
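The mechanism is easy to demonstrate in a few lines. In this sketch (my own illustration, with a made-up transfer function, not code from any actual tape-emulation plug-in), an asymmetric nonlinearity adds even-numbered harmonics to a pure tone, while the tanh gently squashes peaks in a crude imitation of tape compression:

```python
import numpy as np

sr = 48000                              # sample rate, Hz
f = 440.0                               # test tone frequency, Hz
t = np.arange(sr) / sr                  # one second of samples
clean = 0.5 * np.sin(2 * np.pi * f * t)

def tape_saturate(x, drive=0.3):
    """The x**2 term makes the curve asymmetric, creating even
    harmonics; tanh soft-limits peaks ('tape compression')."""
    return np.tanh(x + drive * x * x)

warm = tape_saturate(clean)

def harmonic_level(signal, harmonic):
    """Spectral magnitude at `harmonic` * f (FFT bins are 1 Hz apart here)."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return spectrum[int(round(harmonic * f))]
```

Feeding the 440 Hz sine through `tape_saturate` produces measurable energy at 880 Hz, the second harmonic, that the clean signal lacks -- the "warmth" described above is exactly this kind of added coloration.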

When mastering was done with rooms of equipment driven by racks of glowing vacuum tubes, printing to half-inch-wide magnetic tape, this wasn't an issue. But now that we master on computers it is, and all sorts of software has appeared to generate the kind of warmth previously supplied by huge racks of vintage gear.

My favorite software -- one of the many good programs of this kind -- is produced by a Polish company called PSP Audioware. The sound is great, the software is very intuitive to use, and it's dirt cheap.

The cheapness comes from the way PSP does business: it's two guys, in Poland, who write the software themselves and distribute it via downloads from their website. They also provide tech support themselves (at least they've answered the few questions I've had), and since they're also the guys who wrote the software, they do a better job providing support than most computer users get from behemoth companies.


This is a mode of doing business that was impossible until recently, and it's one that's wonderfully suited to countries like Poland (and India) that have lots of smart people but suffer from mediocre infrastructure and a shortage of investment capital. I'm sure that shipping the software, on disks, from Poland to the rest of the world could be a much bigger headache, and produce far fewer sales, but Internet downloads solve these problems.

What's news about this is that it isn't news. Ten years ago, the notion of quality software from Poland would have been a joke to most people, and the idea of selling it to consumers over the Internet would have seemed equally far-fetched. Yet now such ventures are commonplace. In the audio software field alone, there are literally dozens of companies like this -- small shops, selling excellent software via download at very attractive prices, often from places not generally associated with computer leadership.

Twenty years ago, the guys at PSP would have been miserable drones in some horribly run state software enterprise, if they were able to work in software at all. Ten years ago, they would have been wondering how to sell their skills to the West without emigrating. Now they're earning hard currency from buyers around the world without having to manufacture or ship any tangible goods at all.

Remember this when people tell you that the whole Internet thing was just a bubble. But the impact doesn't stop there. With that software, I mastered some recordings by a Ugandan band called Afrigo. My brother is an African historian and travels to Africa regularly. He found out about Afrigo, the most popular band in Uganda, and offered to help them get some broader exposure.

Internet access was lousy in Kampala then, so they mailed us some CDs (they record their music on one of those studio-on-the-shelf setups). I mastered it using the PSP software and uploaded their songs to MP3.com. Back then, before it was destroyed by music-industry lawsuits, MP3.com was the place to go for interesting independent music. (There's still an MP3.com site, but it's nothing like what used to exist.) What's more, bands got paid based on how much their songs were downloaded. Afrigo's music turned out to be pretty popular, and it earned a few hundred dollars a month. That's not a lot of money to an American band, but it's a pretty good chunk of change in Uganda.

What's more, their exposure on the site got their music noticed elsewhere. If MP3.com had lasted longer, I think it would have helped more African bands, but the Internet will do the job anyway, given time. Perhaps the world's greatest reservoir of wasted human talent -- that is, ability that was never developed and recognized to the degree it deserves -- is Africa. And, because of that, Africa may have the most to gain from the communications revolution.

Africa has been exporting music to the world for centuries, of course. Almost every musical form of the past century -- from gospel, to ragtime, to blues, to jazz, to rock and roll, to reggae, to techno -- has its roots in African musical styles. And African art has influenced Western artists from Picasso to Modigliani to Renee Stout.

The world has gotten a lot from Africa. Africa, however, has gotten much less from the world. But that may change now that Africans are working in media that can make money via the Internet and other communications technologies, which make it easier to get their work out, and other people's money in. It's not just Afrigo. Other bands, like Hay Izy from Madagascar (playing a mixture of tribal vocals and hip hop), Ras Shaheema from Namibia (reggae), and Co. Operative from Zimbabwe, are taking advantage of the new technologies. We get the benefit of the diversity of African music today, while the bands' chances of financial viability greatly improve.

It's not just music. A while back I watched a Nigerian movie called To Rise Again: an enterprising mixture of Scarface, Sliding Doors, and It's a Wonderful Life. Nigeria's film industry is booming and is now a regional threat to India's third-world film capital of Bollywood.

To Rise Again is a well-done and interesting picture, with a budget probably in the neighborhood of $20,000. Africans, Nigerian expatriates around the world, and American film buffs in the States have all been able to participate in its success. Thanks to DVD and Video CD technology, distributing a movie is nothing like the challenge it was a couple of decades ago. And filmmaking -- thanks to digital video cameras and PC-based editing -- is not nearly as expensive as it was even a few years ago.

Given that Africans have as much talent and ambition as you'll find anywhere else, these lowered barriers are likely to mean that African musicians, actors, producers, and directors will enter the global market at a growing rate. And given that, historically, African culture has been very intriguing, even appealing, to the world at large, the growth of inexpensive communications technologies is likely to mean a greater Africanization of world culture in general.

African culture has taken the world by storm even in the face of drastic economic and transportation barriers. Imagine what it may accomplish now that those barriers are falling. While it may be a long time, if ever, before Africa becomes an entertainment center to rival, say, California, its share of the world market seems likely to grow dramatically, while California's influence shrinks.

The consequences are likely to be interesting. Antiglobalization types accuse American culture of spreading Western ideas that corrupt "traditional" cultures. Yet, if you listen to African songs, you find more religious influence than you find on the American charts, including many Christian-influenced songs.

Likewise, the Nigerian film industry, based in Christian southern Nigeria, is heavily Christian-influenced, producing works that make the "Left Behind" films look downright secular by comparison. Its continental rival, the Ghanaian film industry, has a similar orientation, with a heavy inclination toward Pentecostalism.

If these industries grow, the result could well be a far more Christianized Third World. What will the antiglobalization folks say if the growth of Third World entertainment industries leads to a far more conservative media climate around the world?

Regardless, new technologies have created jobs and prospects in Africa that were almost unimaginable back in 1985. Which raises a question: While the rock stars who gave of their time to perform at Live Aid and Live 8 received much public praise for their selflessness, what of the engineers and scientists whose work made these new technologies possible? Will they get similar praise? Probably not, though reportedly Bill Gates was cheered like a rock star at the Live 8 concerts. [1]

APPLE STARTED IN A GARAGE, TOO

MP3.com is gone, but its successor, in many ways, is a company called GarageBand.com. GarageBand, like MP3.com, provides a website that hosts music for bands and allows bands and their fans to connect in various ways. GarageBand draws about 150,000 bands -- a pretty large fraction of the million or so bands in the United States -- especially when you allow for the fact that GarageBand's artists are all writing and recording original music. These aren't the bands that play "Proud Mary" at weddings.

I spoke with GarageBand's CEO, Ali Partovi, about where all this is going. [2] Partovi is an open-faced entrepreneur whose previous venture, LinkExchange Inc., wound up being bought by Microsoft for $265 million. (It's still around as bCentral.) Partovi spoke so rapidly that I had to ask him to slow down, but it seemed like the excitement of an enthusiast, not the fast talk of a salesman. And there's a lot to be excited about.

The key to GarageBand's approach is filtering, but with a human touch. Lots of people listen to the music and review it. Every band gets a chance: musicians upload music, then review each other's work. You don't know much about the artist until after you've done your review; and unlike certain MTV stars, you can't make it on looks alone. A song can rise on the charts very rapidly if people like it; or it can languish for a long time, with no help from big-label payola, if they don't.

Partovi says that this appeals to two things that motivate musicians. "It's both money and a desire to be heard. An increasing number just want to be recognized. There's a mixture of both on our site." The most serious people, he says, usually want to make money: "Music production is time consuming, even with the best technology, so if you're serious you want a payoff. On the other hand, there are lots of people just below the top level who just want to have fans."

GarageBand has already had some success in moving its stars off the Internet and into the wider world. Geoff Byrd's pop-rock songs, which topped the GarageBand charts, were getting enough radio airplay to drive them, at the time of my Partovi interview, to the top 40 on Billboard's radio charts. Another artist, Jenna Drey, was number 23 on the Billboard dance chart. Both made it without the big promotional investments that record companies usually make to get artists on the charts.

What's the secret? Partovi says that it's simple: people tend to like what other people like. The GarageBand songs are pretested on a lot more people than the songs of unknown artists who are signed by record labels. "The role that GarageBand is increasingly playing is as a filter that can predict radio success. That's a very important role. Instead of 'invest first,' the Internet allows us to 'test first' before a big investment. That changes things for both artists and labels."

This probably doesn't mean doomsday for record labels, but it certainly does change things. Partovi thinks that record companies will have less to offer musicians, and consumers, in the future. "Twenty years ago there was no alternative [to the record-label route], because production and distribution were so capital-intensive. Those aren't anymore, but promotion has become much more capital-intensive because there's so much more music out there. Major labels have been reduced to providing the capital for promotion, and the Internet will cut into that too." Record labels' days of holding the whip hand are over. Contrary to what the record industry people thought, Napster wasn't the threat; it's outfits like GarageBand.com, which provide rival services to musicians and listeners alike, that pose a real problem for their industry.

The same may be true for radio. Partovi is very enthusiastic about podcasting, which lets pretty much anyone get into the Internet "radio" business by recording broadcasts that are automatically downloaded and copied onto people's iPods and other portable music players. Podcasters, he says, are becoming a new route for people to discover music they like. "It's the cultural trend of amateur DJs discovering new music -- performing the role that radio DJs should have performed for the last twenty years but haven't. A regular FM DJ could get fired for playing a song by a new artist. Podcasting unlocks that."

Interestingly, he thinks that DJs may thrive in this new atmosphere: "DJs play an important role. Consumers want new music, but most don't want to take the trouble to find it on their own. They want someone else to do the filtering, and the human touch is key."

What's more, podcasting is a better promotional tool than radio in some ways. If you hear a song you like on the radio, you have to figure out what it is, then go find out about the artist. With podcasting it's different: "Once you discover an artist you like via a podcast, the technology makes it easy to find out more about the artist. You can find a band via a DJ's podcast, follow a link to subscribe to the band's podcast, and then the band doesn't need a middleman to get in touch with you. You'll know when they have something new."

That's not only important for the little guy, but for established artists like Paul McCartney who are no longer darlings of the radio. All musicians benefit from a way to reach their fans that doesn't depend on the radio business; the Internet provides one. And it's not just GarageBand or podcasting getting in on the act -- another site, CDBaby.com, has sold 1.7 million CDs and made over $16 million for its artists.

GarageBand offers a lot of podcasting tools on its website, allowing bands to communicate with their fans -- and allowing anyone else who wants to set up a podcast, musical or otherwise, to do so. The Wall Street Journal's technology columnist, Walt Mossberg, tried it and found it easier than most other systems for creating podcasts; there's even a feature that lets you create a podcast by telephone. Still, he concluded (and I agree) that creating podcasts remains a lot harder than creating text-based blog entries. [3] That's likely to change soon, though.

On the receiving side, podcasts have gotten a lot more user-friendly. Apple has upgraded its iTunes to let users subscribe to podcasts via a point-and-click interface, so that anyone who owns an iPod will find it easy to subscribe. Once that's done, iTunes will check for new podcasts from that source and then download them every time a user plugs in the iPod. And GarageBand is thinking of creating a podcasting site that specializes in nonmusical subjects, like interviews and news reporting.
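Under the hood, a podcast "subscription" is nothing exotic: it's an RSS feed that a client like iTunes polls periodically, downloading any episode enclosures it hasn't seen before. As a rough sketch of that mechanism (the feed, band name, and URLs below are invented for illustration, not taken from any real podcast), a client only needs to parse the feed and collect the new enclosure URLs:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up podcast feed. Each <item> carries an <enclosure>
# whose url attribute points at the audio file to be downloaded.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Band Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="http://example.com/ep1.mp3" type="audio/mpeg" length="123"/>
    </item>
    <item>
      <title>Episode 2</title>
      <enclosure url="http://example.com/ep2.mp3" type="audio/mpeg" length="456"/>
    </item>
  </channel>
</rss>"""

def new_episodes(feed_xml, already_have):
    """Return enclosure URLs not yet downloaded, in feed order."""
    root = ET.fromstring(feed_xml)
    urls = [item.find("enclosure").attrib["url"]
            for item in root.iter("item")]
    return [u for u in urls if u not in already_have]

# If Episode 1 is already on the player, only Episode 2 is fetched.
print(new_episodes(FEED, {"http://example.com/ep1.mp3"}))
# prints ['http://example.com/ep2.mp3']
```

A real client would then fetch each URL and copy the file onto the player, which is essentially what iTunes does every time the iPod is plugged in.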


One of the biggest things holding podcasting back -- and protecting commercial radio -- is the copyright barrier. Radio stations operate under so-called "blanket licenses." By paying an annual fee to clearinghouse organizations like ASCAP or BMI, they can play songs without having to get permission for each one. The clearinghouses then divide the money according to a formula and forward payments to artists. (Nothing wrong with that; I'm an ASCAP member myself and occasionally get a check when somebody uses one of my songs.)

On the Internet, however, things are much harder. In a recent column in Wired magazine, Larry Lessig reports on how copyright concerns made it effectively impossible for a nonprofit he works with to put a recording of "Happy Birthday" (yes, it's still under copyright and will be until 2030) on the Web. At first, they thought they could purchase a "mechanical license" (which operates under a similar sort of clearinghouse arrangement). But then the lawyers decided that they needed a separate permission from Warner/Chappell Music, which manages the rights to "Happy Birthday." Warner first agreed to grant them a license for $800, but then changed its mind. By that time, the lawyers were worried that people would take Lessig's performance and remix it, making him an accessory to copyright infringement. Lessig concludes: "The existing system is just workfare for lawyers." [4]

Yes, it is. And it's likely that commercial broadcasters -- who are seeing their audiences shrink because, not to put too fine a point on it, their programming stinks -- will oppose any legal changes that might eliminate this sort of barrier. Anything that makes life easier for podcasters, and Web music in general, is likely to make things worse for them. At this point, their comparative advantage isn't technological or creative: it's the advantage conferred by a friendlier legal environment.

Of course, radio stations are relying on such legal protection, even against old-media competitors, in the form of low-power radio. Because as things stand now, the Federal Communications Commission is a major barrier to free speech, and the only justification for its position has been exploded.

How big a barrier? This big, where radio is concerned:

Freedom to create means more than that: not just the right to choose among 500 TV stations instead of three, but fewer barriers to setting up a station of your own; not just greater ease in joining the officially licensed elite, but the right to operate outside it. Like the freedom to choose, the freedom to create is being withheld by an alliance of policymakers and professionals. The technical cost of starting a station has been within most Americans' reach for years. The legal cost, however, is much higher: thousands of dollars to purchase an existing license, thousands more to cross various regulatory hurdles. With very few exceptions, the FCC won't even issue licenses to noncommercial stations of less than 100 watts. Class A commercial stations require at least 6,000 watts of power. [5]

Some years ago, the FCC decided to license low-power radio as a separate category. Powerful broadcasting interests -- including, ironically, National Public Radio -- responded to the threat of competition by lobbying successfully for legislation that made the licensing of low-power stations far more difficult. In particular, the spacing between stations that the bill required made the creation of low-power stations in urban areas very difficult. One of the requirements of the bill, however, was a technical study on interference, with a provision for removing the spacing requirement if the study showed that interference wouldn't be a problem.

Now the study, done by the MITRE Corporation, has been released with little fanfare and only after the threat of a Freedom of Information Act demand. Such reluctance suggests that the FCC didn't much want to hear the study's results. And that may be because the study finds that low-power FM radio doesn't pose a significant interference problem. Here's an excerpt from the study:

Based on the measurements and analysis reported herein, existing third-adjacent channel distance restrictions should be waived to allow LPFM operation at locations that meet all other FCC requirements [after four small revisions].... Perceptible interference caused during the tests by temporary LPFM stations operating on third-adjacent channels occurred too seldom ... to warrant the additional expense that those follow-on activities would entail. [6]

Here's the question a lot of people have been raising lately: Is the FCC really devoted to efficient and diverse communications, or is it just a bureaucratic flunky for the big broadcasting companies? The FCC's response to the MITRE report -- which was buried in the comment section of the FCC's website and not publicly announced -- suggests that the cynics may be right. But it's not too late for the FCC to prove them wrong.

Former FCC Chair Michael Powell's justification for relaxing the rules on broadcast media concentration was that new media -- the Internet, satellite broadcasting, etc. -- would ensure that concentration in commercial broadcasting would be offset by new sources of information. As James Plummer wrote a while back, ending the suppression of microradio is a better way of promoting diversity than more regulation. [7] If the FCC really believes in broadcast diversity, then now that the bogus interference concerns raised by NPR and the National Association of Broadcasters have turned out to be, well, bogus, it should endorse the growth of low-power FM stations. Sure, Clear Channel and NPR don't want to face the competition. But protecting fat cats from competition isn't what the FCC is all about, is it?

Or maybe it is. But as more and more people get access to the tools of creation and distribution, it's more likely that politicians will recognize that there are more voters who want to create than voters who want to stand in their way. The FCC has made some moves of late to make low-power FM stations easier to establish, but it is still treating them as second- or third-class citizens: unprotected from interference and often overridden by "translators" used to extend the range of big commercial and public stations. Legislation before Congress might change that, but it's strongly resisted by broadcasters, as you might expect.

Anxious to hold on to its piece of the pie, Big Media encourages restrictions that make the field -- movies, music, broadcasting, whatever -- less attractive to consumers. [8] So customers leave the field entirely or substitute goods that are less regulated. Unable to get permission to use commercial music on the Internet, people often turn to independent bands that license their music freely. Radio gets duller and more boring, so people turn to podcasts. Movies are more restricted, so people turn to video games or independent films. As Princess Leia said to Grand Moff Tarkin: "The more you tighten your grip, the more star systems will slip through your fingers." That's a lesson -- taken from one of its own products, no less -- that the entertainment industry would do well to learn.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sat Nov 02, 2013 9:56 pm


Unfortunately, technology empowers the bad people as well as the good. Take terrorists, for example. Modern explosives, computers, and communications magnify the damage that an individual or a small group can do. On the other hand, technology also makes the rest of us better equipped to face such threats. Dealing with both sides of that equation will be one of the big challenges of the twenty-first century.

Right now, we're not dealing with it especially well. Governments want to keep this sort of power to themselves, and they're not very good at taking small-scale approaches to, well, anything. For governments, bigger is almost always better.

But, in fact, responding to attacks and disasters is something that individuals and small groups may be better situated to deal with than governments. Certainly the amateurs on the scene have one big advantage that the government usually lacks: they're on the scene. In all sorts of circumstances and capacities.


It is no secret that Al Qaeda and other Islamic terror groups make extensive use of the Web. Some websites provide coded messages, in the same way radio stations used to broadcast coded messages for spies in enemy territory. Others play a role in recruiting, disseminating propaganda, and soliciting donations. Some may serve all of these functions.

No doubt various official U.S. government agencies are looking at these sites in order to gather intelligence and identify enemies. But they're not alone. In fact, a surprising number of ordinary citizens have gotten involved as well.

Sometimes, the stings are quite elaborate. For example, the pseudonymous hacker "Johnathan Galt" appears to have set up a phony pro-terrorism site that solicited support and donations from those sympathetic to Islamic terror. After operating for several months (with, apparently, the assistance of Islamist bin-Laden sympathizers who thought it was genuine), the site became a new and improved anti-Islamic terror site sporting the legend, "We've changed our mind: Jihad is crap!" No doubt Mr. Galt also harvested a great deal of information useful to the authorities, including IP addresses, cookie-tracking information, and, of course, identity information via the PayPal donations he accepted. [1]

Similarly, Internet entrepreneur "Jon David," who runs a number of Internet porn sites as his day job, has made a hobby out of hijacking pro-terror websites. Most recently he scored a coup by successfully taking over the Al Qaeda website. [2] Visitors were redirected to a mirror page operated by David, from which he harvested 27,000 IP addresses per day, along with other information he has shared with the FBI. (No big surprise in one discovery: 90 percent of his visitors came from Saudi Arabia.)

Not as James-Bondian but still pretty important, webloggers like Charles Johnson ask their readers to look for pages containing support for terrorism, publicize the results, and attempt to bring pressure on the ISPs to shut the sites down. And other folks have jumped in with ideas for disinformation and pranks that will spread confusion among jihadists at very low cost.

At the very least, website monitoring helps keep people informed of what's going on, and website hacking means that terrorists and terrorist wannabes have to constantly worry about whether their Web operations have been compromised. Both kinds of actions serve to make life much tougher for terrorists and their supporters.

It's hard to know how these actions compare to whatever is being done by government agencies. Possibly, far more sophisticated operations are underway by skilled and well-equipped government hackers. On the other hand, Jon David's experience suggests otherwise. When David approached the FBI to tell them that he had captured Al Qaeda's website and that he was eager to cooperate, the FBI's response was glacial:

It literally took me five days to reach anyone in the FBI that had an even elementary grasp of the Internet. By that time, the hostiles realized the site I had up was a decoy and then advised everyone away from it. I still gave the FBI all the log information and link information to the hostile boards and whatnot, but it's far from what could have potentially been done if they would have acted more quickly. [3]

The good news is that the Bush administration seems to be figuring out that creative individuals may be able to complement law enforcement's more traditional approach. Richard Clarke, when he was White House computer security adviser, publicly encouraged white-hat hacking and offered to put the administration's weight behind any legislative changes needed to protect good-guy hackers from prosecution or litigation. That's a good start, especially in light of the software industry's tendency to punish those who point out flaws for fear of bad publicity. But Clarke was mostly concerned with probing friendly systems for weaknesses. Clarke's long gone now, and I'm not sure that his successor is as supportive. What we really need is a counterterrorist program that harnesses the energy and innovation of good-guy hackers. Terrorism is a decentralized, fast-moving threat, which means that a decentralized, fast-moving response makes sense. Bureaucracies aren't good at that, but ordinary Americans are.

Electronic privateering, anyone? It's an idea whose time may have come.


But, of course, the role of involved citizens, empowered by technology, goes well beyond that. In fact, we saw it on 9/11.

Albert Einstein once said that the most powerful force in the universe is compound interest. Arguably so. But I think that the most powerful force in the human universe is the learning curve.

The war on terrorism provides good examples of this phenomenon on both sides. Before September 11, the terrorists were the ones with a learning curve. Although there is plenty of evidence that the Al Qaeda crowd isn't especially bright, over the years they demonstrated the salutary (for them) qualities of persistence and willingness to learn from mistakes. When truck bombing the World Trade Center failed, they started looking at airplanes. When initial efforts to hijack airplanes failed, they changed their approach.

The aviation-security establishment, meanwhile, was much less adaptable. It concentrated on stopping 1970s-style skyjackings, where the chief goal was publicity (and perhaps money) rather than murder. Later, efforts began to turn toward blocking Lockerbie-style bomb smuggling. And because the security system was blocking such efforts with a fair degree of efficiency, it didn't change its approach even when confronted with indications that the terrorists were changing theirs. Bureaucracies are supposed to be about sharing information, but information is power in bureaucracies, and people are not all that keen about sharing power.

The result was that, on September 11, the terrorists held all the cards. They carried only items that did not violate carry-on rules. They avoided scrutiny designed to thwart bomb-smugglers -- scrutiny based on the assumption that terrorists wouldn't want to die with their victims. They took advantage of a stay-passive philosophy that urged (indeed, required as a matter of policy) cooperation rather than confrontation with hijackers.

But no sooner did the first plane strike the World Trade Center than the hijackers had to confront someone with a swifter learning curve. As Brad Todd noted in a terrific column written just a few days later, American civilians, using items of civilian technology like cell phones and twenty-four-hour news channels, changed tactics and defeated the hijackers aboard United Airlines' Flight 93. These civilians overcame years of patient planning in less than two hours.

Just 109 minutes after a new form of terrorism -- the most deadly yet invented -- came into use, it was rendered, if not obsolete, at least decidedly less effective.

Deconstructed, unengineered, thwarted, and put into the dust bin of history. By Americans. In 109 minutes.

And in retrospect, they did it in the most American of ways. They used a credit card to rent a fancy cell phone to get information just minutes old, courtesy of the ubiquitous twenty-four-hour news phenomenon. Then they took a vote. When the vote called for sacrifice to protect country and others, there apparently wasn't a shortage of volunteers. Their action was swift. It was decisive. And it was effective. [4]

No one has successfully hijacked a Western civilian airliner since -- and, as "shoe bomber" Richard Reid learned, those terrorists who threaten civilian airliners now tend to emerge rather the worse for wear. Against bureaucracies, terrorists had the learning-curve advantage. Against civilians, they did not.

No surprise there. American civilians, perhaps even more so than their counterparts in Europe, Japan, and the rest of the industrialized world, are used to making rapid changes based on new information. Accustomed to a steep learning curve in business and in life, we should be able to out-adapt those who, after all, are ultimately committed to returning the world to a simulacrum of the twelfth century.

There's a lesson here. Societies that encourage open communication, quick thinking, decentralization, and broad dispersal of skills -- along with a sense of individual responsibility -- have an enormous structural advantage over societies that don't, an advantage that increases in a world of high technology and unconventional war. But tyrants and fanatics of whatever stripe cannot afford to encourage those traits in their citizens if they want to remain in power. The message that this should send to our adversaries is one they should find disheartening: The only way you're likely to beat us is by becoming like us -- at which point, more than likely, you won't want to beat us anyway.

The Americans acting aboard Flight 93 were not an aberration. In fact, Americans responded to the 9/11 attacks in similar fashion elsewhere.

One barely reported story from September 11 illustrates this better than any other. An improvised navy evacuated roughly a million people by boat from Lower Manhattan, in an operation that some have called an American Dunkirk. Ferries, commercial boats, and pleasure craft spontaneously assembled to carry people away from the scene of the attack and to return with needed supplies:

People at Ground Zero, the Manhattan Waterfront, nearby New Jersey, Staten Island and Brooklyn waterfronts, and crews on the numerous vessels repeatedly used the phrases "just amazing," "everyone cooperated," and "just doing what it took" to describe maritime community responses. Individuals stepped up and took charge of specific functions, and captains and crews from other companies took their direction .... Private maritime operators kept their vessels onsite and available until Friday, Day Four, when federal authorities took over. [5]

"Day Four, when federal authorities took over." There's a lesson in that phrase, isn't there? This wasn't just an evacuation: it was a whole alternative logistic system, improvised on the fly by people who didn't work for the government. Fuel, water, and food were brought in; when there were problems moving big pieces of steel at the site, the boats brought structural ironworkers from New Jersey, along with boots, oxygen and acetylene cylinders, and whatever else was needed. This effort got some coverage at the time but has largely been forgotten in the aftermath, since ad hoc groups don't have PR agents to keep their deeds in the public eye. Still, it was one of the most amazing feats of human self-organization ever, and it deserves more attention than it got.

Of course, many of the players in the New York evacuation and supply effort already possessed the technical skills that they needed -- it was just a question of applying them to the job at hand. Such might not be the case among a group of ordinary citizens at the scene of another disaster.


But things don't have to be that way. With a modicum of effort, it might well be possible to ensure that people at the scenes of disasters are prepared and possess the necessary skills for quick action on their own. How? By training them now.

Both the prevention of and the response to terrorism might be handled, at least in part, on a dispersed-among-the-citizenry basis. Prevention could be done by training volunteers to watch for suspicious indications that might warn of terrorism, and perhaps even by informing certain select (but large) groups of intelligence data. The September 11 hijackers and D.C. shooter John Muhammad displayed lots of warning signs. The problem is that we were not ready to read those signs. [6]

Citizens could do much more in response to terrorism. Many have suggested encouraging people who are licensed to carry guns (an early technology for empowering individuals) to do so. After all, it was armed individuals working for El Al, rather than a law enforcement agency, who stopped Mohammed Hadayet's Los Angeles International Airport shooting spree almost as soon as it started. Armed citizens, especially if trained in what to look for, could be a very valuable line of defense against terrorism. In almost every instance of terrorism, the true first responders will be the people already on the scene. And, as Flight 93's passengers reminded us, that response can be decisive.

In addition, people trained in first aid (especially the specific skills likely to be useful in the aftermath of a terrorist attack), in recognizing the signs of chemical or biological attack, and in various other disaster-recovery skills could contribute a lot. Even in the case of such relatively "mundane" events as truck bombings and shooting sprees, individuals on the scene will have to wait crucial minutes before aid even begins to arrive.

People should also be encouraged to carry cameras, or video cameras, and use them in the immediate moments after an attack to gather potentially valuable data. Would people remember to use them? Probably. They often take video of disasters anyway (there's something about a viewfinder that tends to steady the nerves); it wouldn't take much to get people to do that.

In the case of the D.C. sniper attacks, even a massive law enforcement presence couldn't prevent terror attacks it knew were about to happen. But an informed and prepared citizenry -- the likes of which stopped "shoe bomber" Richard Reid, helped stop Mohammad Hadayet, kept Flight 93 from smashing into the Capitol, and finally caught D.C. snipers Muhammad and Malvo -- can be everywhere. It already is.

After repeatedly slipping through the fingers of law enforcement, John Muhammad and Lee Malvo were caught because leaked information about the suspects' automobile and license number was picked up by members of the public, one of whom spotted the car within hours and alerted the authorities. He even went so far as to block the exit from the rest area with his own vehicle to make sure they didn't escape. "You can deputize a nation," said one news official after the fact.

With proper information, the public can act against terrorists -- often, as we found on September 11, faster and more effectively than the authorities. The key, as blogger Jim Henley noted, is to "make us a pack, not a herd." [7]

The problem is that this goes against the very grain of intelligence agencies, law enforcement agencies, and the rest of the bureaucratic infrastructure. Within bureaucracies in general -- and doubly within intelligence and law enforcement bureaucracies -- information is power, and power isn't something you want to share. If you deputize a nation, doesn't that make the official deputies feel just a little bit less special?

The problem with this mindset is that it's all about bureaucratic turf, and not about getting the job done. Otherwise we'd have learned the lesson long ago. As Canadian journalist Colby Cosh remarked:

I'd have thought the Unabomber case would have taught police, I don't know, everywhere that it is better to be liberal than stingy in releasing information to the public. Remember the Unabomber -- the serial killer who was caught because his prose style was recognized? Yeah, that guy. If Charles Moose and his merry men had actually succeeded in sitting on the information they wanted sat upon, Muhammad and Malvo might have been popping another D.C.-area shopper's head like a grape while you read this. Keep this in mind as you hear their police work praised in the days to follow. [8]

That's a bit harsh, but the point is clear. There are good reasons police might want to keep some kinds of information confidential -- they need details that will let them screen out calls from nutballs other than the real killer (though that didn't work very well in the D.C. sniper case), and they don't want to create an unnecessary panic or provoke an orgy of finger-pointing and suspicion. These are actions based on legitimate concerns, but they can actually facilitate crime if overdone. And police are overdoing them.

It seems pretty clear that the authorities, overall, view the citizenry as a herd, not as a pack. They see ordinary people as sheep, with themselves in the role of shepherd. Without close supervision, they assume, people will erupt into mob violence, or scatter in fear.

The evidence, however, doesn't support this assessment. As sociologist Kathleen Tierney writes, contrary to what pop portrayals of disaster might have predicted, the response of ordinary New Yorkers to the 9/11 attacks was "adaptive and effective":

Beginning when the first plane struck, as the disaster literature would predict, the initial response was dominated by prosocial and adaptive behavior. The rapid, orderly, and effective evacuation of the immediate impact area -- a response that was initiated and managed largely by evacuees themselves, with a virtual absence of panic -- saved numerous lives. Assisted by emergency workers, occupants of the World Trade Center and people in the surrounding area helped one another to safety, even at great risk to themselves. In contrast with popular culture and media images that depict evacuations as involving highly competitive behavior, the evacuation process had much in common with those that occur in most major emergencies. Social bonds remained intact, and evacuees were supportive of one another even under extremely high-threat conditions. [9]

What's more, such responses are typical, even though they often infuriate outsiders. For the government it's upsetting, because people aren't asking it what to do. For the media it's frustrating, because there's no one in charge to interview. But we shouldn't assume that these frustrations have anything to do with effectiveness. As Tierney notes, people improvising on the scene often look disorganized because there's nobody in a uniform running things. But their on-the-spot improvisations and local knowledge often make them more effective than a more impressive-looking operation made up of people in uniforms. [10]

So while Chief Moose and the other talking heads were holding press conferences in which they castigated the press for reporting information, they should have been figuring out how to take advantage of the vast resources that a mobilized public can command. But the officials didn't want to, for fear of "vigilantes." Luckily for them, a leak saved the day.

Regardless of whether or not the D.C. snipers count as "terrorists" under your particular definition (they do under mine, but the authorities seem to have been shooting for a much narrower standard), there seems little question that in coming years we're going to be dealing with a lot of fast-moving, dispersed threats of the sort that bureaucracies don't handle very well. (Every dramatic domestic-terrorism victory so far, from Flight 93 to bringing down the LAX shooter to spotting the D.C. killers, was accomplished by non-law-enforcement individuals.) Rather than creating new bureaucracies, we need to be looking at ways of promoting fast-moving, dispersed responses, responses that will involve members of the public as a pack, not a herd. Even if doing so reduces the career satisfaction of the shepherds.

As David Brin points out, the trend over the past century was to put this sort of thing into the hands of "official" organizations. But with technology empowering people in new ways, that's changing, and it's time we changed our approaches to take account of this difference. [11] I hope that people in Washington are paying attention to this. But the evidence so far isn't too encouraging. On the other hand, some people are catching on.

Responding to the 9/11 Commission's report released in 2004, J. B. Schramm wrote in the Washington Post: "A first review of the Sept. 11 commission's report indicates that the system failed, but that is wrong. While the U.S. air defense system did fail to halt the attacks, our improvised, high-tech citizen defense 'system' was extraordinarily successful...." The important question, according to Schramm, is not "How did the government/CIA/FAA fail us?" but rather, "How did the networked citizens on the ground and in the sky save us?" [12]

We shouldn't let this fact make us overconfident, of course. Structural advantages are a wonderful thing, but no one is invincible. However, as we look at how to order our society in the wake of September 11, and with the prospect of other disasters as unanticipated now as the September 11 attacks were on September 10, we should not lose sight of what it is that makes us strong -- the flexibility and decentralization that make American society great, and that drive bureaucrats nuts. Bureaucrats like centralization and control. But even fundamentalist terrorists can outthink bureaucrats. It's up to the rest of us to make sure that neither the terrorists nor the bureaucrats get their way.


When I've written on this subject in the past, readers have requested that I write on what, specifically, individual citizens can do to prepare for a role in responding to, and preventing, terrorism. Okay.

I will say up front, though, that although I'm totally in favor of individual citizens taking the initiative to prepare themselves, such self-help measures would do more good if the federal and state governments actually took a role in encouraging and facilitating them. But if you want to get a leg up on the process before the much slower bureaucracy gets the ball rolling -- if it ever does -- here are a few things you can do to help. Odds are that you'll never use them or even come close to needing them. Terrorist attacks are pretty rare. But you'll probably never need your smoke detector either. And, anyway, many of these skills and behaviors may turn out to be useful otherwise.

Prevention: Where terrorism is concerned, an ounce of prevention is worth a metric ton of cure. But what can you do to prevent terrorism?

Well, you can't intercept Al Qaeda communications unless you're an unusually skilled cyberwarrior of the sort discussed above. But terrorists tend to give off warning signals before they strike: they profess sympathy to Al Qaeda (a pretty good giveaway), they make threats, they brag to strippers, and they engage in other behaviors that don't add up. In the past, people have failed to report these warning signs for fear of seeming prejudiced. Those days are over, I think, and you should certainly be prepared to report to authorities things that seem odd -- especially as you, unlike the authorities, needn't worry too much about being charged with ethnic profiling. (Whether the authorities will listen or not is another question -- they didn't where John Muhammad was concerned -- but there's only so much you can do about that.)

Aside from reporting any potential terrorists you might run across at strip clubs ("Honey, I was just protecting 'homeland security'!" probably won't work as an excuse), you can maintain situational awareness, especially in public places like airports, shopping malls, and so on. Jeff Cooper's book Principles of Personal Defense [13] contains a number of games and mental exercises designed to promote that sort of awareness. Short of that, just get into the habit of noticing what's going on around you. Scan for people who look suspicious or are acting oddly, unattended bags or packages, and so on. (For practice, try to notice something distinctive about each person you see -- a tattoo, a crooked nose, whatever. Really look at people instead of just skimming the crowd.)

Also, consider what you'd do if you saw something unusual. Obviously that depends on what you see -- if you see a guy pulling out a gun, you're not going to have time to call security -- but if you see an unattended package you probably will. Either way, you should know whom to call, and what to say, or what to do if there's not time to call anyone. No need to get obsessive, but do play a few of these scenarios out in your mind and you'll be prepared if the situation actually comes up.

Response: How do you prepare to respond once it's too late to prevent something? Carrying a cell phone is something anyone can do, and experiences ranging from Flight 93 to the more recent Moscow theater incident and the London bombings demonstrate that having people on the scene with cell phones is enormously valuable. Be prepared to report what's going on clearly and concisely. Think about what information is valuable to authorities trying to respond -- exactly what you're seeing, how many people are in the area, how many terrorists (if any) are present, how they're armed, and so on. (Example: "There are four guys wearing black, they've shot several people, and they're carrying AK-47s and pistols" is a lot more useful than "There are some guys shooting!" or "Help! It's terrorists!")

If you can legally carry a gun, you may want to consider doing so on a regular basis. But remember that there's nothing magical about a gun. If you're going to carry it, you need to be good at hitting what you shoot at, and -- just as important -- you need to practice in situations that will help you formulate judgment about when and how to shoot. Training courses along these lines are available most places, and if you're planning to carry a gun regularly they're a good idea. (In fact, given the woeful nature of most law enforcement officers' training and practice, if you take one of these courses and practice regularly, you may actually be better prepared than many of the professionals.) Of course, many places forbid guns -- and, not surprisingly, they're often prime terrorist targets. So you may want to brush up on your unarmed-combat skills too. Courses in those are even more common and provide good healthy exercise anyway.

Preparation: Sadly, many terrorist events will involve things that no degree of prior awareness or self-defense skill will do much to prevent. Terrorists, not exactly paragons of bravery or fair play, tend to choose methods that are hard to stop by such means: bombs, for example. Unless you spot the bomb or bomber in time for people to be evacuated, you probably won't be able to do much in response until after it goes off.

So brush up on your first aid skills too. If there's a mass shooting, or if a bomb goes off, help will be on the way within minutes. But "minutes" can be a very long time in the aftermath of a bomb or a shooting. The Red Cross and other organizations offer first aid courses, though most of them focus more on responding to isolated individual accidents than on dealing with the massive trauma that often occurs after a terrorist attack. (Maybe these courses should be updated.) I once took an advanced course that did cover this sort of thing (along with a lot of other stuff I hope I'll never use, like improvised traction and bone setting), though such training is a bit harder to find. But simply applying direct pressure to wounds and keeping airways clear can go a long way toward keeping someone alive until more advanced help comes.

Getting in the habit of having a video camera or small still camera around can be helpful too, as I suggested earlier. If it's a cell-phone camera, you may even be able to send pictures to the authorities right away. Photos during, or in the immediate aftermath of, a terrorist attack may well reveal useful information, as well as making you a temporary celebrity -- and perhaps a few bucks. Just be sure your batteries are charged! (And don't get so interested in taking pictures that you forget to duck.)

And what about your home? The disruptions caused by terrorist attacks tend to be short-lived, but anyone should be ready to live without power, food, or water for at least a few days. The Red Cross website has a list of recommendations for disaster preparedness that is a good starting point. Gas masks and Geiger counters are, it seems to me, overkill unless you live next to a hazardous waste facility or somesuch. If you disagree, lots of places on the Web offer to sell and advise on this kind of merchandise. But being able to take care of yourself, your family, and perhaps a few others for a week or more is a good idea and will do much to ease the burden on disaster services.

These recommendations just scratch the surface, of course, but they should at least point you in the right direction. Many of them will also prove useful even if you never encounter a terrorist: being aware of your surroundings may prevent a rape or mugging (both more likely, statistically, than terrorism anyway); having emergency supplies at home will pay off in the event of a blizzard, hurricane, earthquake, or other natural disaster. Perhaps most importantly, if you formulate the habits of mind that will keep you alert and focused in an emergency (instead of paralyzed or panicky), you improve your odds in all sorts of unfortunate situations, regardless of whether terrorism is involved. Even before 9/11, the "leave it to the professionals" approach to safety and security was obviously a bad idea. And that will remain true even after the last Al Qaeda sympathizer is pushing up daisies. Let's just hope that the government catches on to this, sooner or later, and offers the kind of support that will move these suggestions from the category of "self-help" to the category of "national defense."


There are some promising signs in that direction at the moment. A recent article from the Christian Science Monitor describes a trend toward terrorism vigilance -- mostly by volunteer groups -- in the years since September 11. Pennsylvania has been training citizens, ranging from business owners to members of the Rotary Club, in antiterrorism preparedness and response since 2002. Over sixty thousand have received courses in how to recognize terrorism and how to respond. [14]

At the federal level, the Coast Guard has set up "America's Waterway Watch," [15] encouraging recreational boaters and maritime workers to be alert to any suspicious behavior that might indicate possible terrorism. Volunteers are trained to be wary when people pay cash to rent boats, don't take bait when "fishing," and show inordinate interest in things like naval bases or chemical plants. Sounds suspicious to me, all right. Similarly, the Air Force's "Eagle Eyes Program" trains people who live and work on and around air bases to be aware of suspicious conduct. [16] And Highway Watch is a program organized by the Department of Homeland Security and the American Trucking Association to get truckers to recognize and report suspicious activity -- especially important given that a truck, particularly one loaded with gasoline or other dangerous cargo, can be a dangerous terrorist weapon all by itself. [17] Sometimes this kind of thing helps. A few years ago, a truck driver noted that twenty-five boxes set to be shipped to Saudi Arabia contained suspicious information; he tipped off authorities, and it turned out that the shipment did have terrorist connections. [18]

Meanwhile, guarding against another sort of threat entirely, NASA is enlisting amateur astronomers to help search the skies for killer asteroids so that we'll know they're coming in time to prepare. This recruitment of amateur astronomers is relatively new, though these "non-experts" have been researching asteroids for a while. Much of the collaboration, as one reporter notes, occurs on an Internet message group called the Minor Planet Mailing List. The group boasts over eight hundred members and is run by Richard Kowalski, "a forty-year-old baggage handler at US Airways in Florida by day and an astronomer by night." [19]

Harnessing the passion and persistence of such amateurs seems a smart way to deal with a diffuse but real threat like killer asteroids. And it's made possible by a world in which technology and economic growth allow a forty-year-old baggage handler to own a telescope setup better than many universities would have possessed a few decades ago. ("Kowalski observes the skies through an eleven-inch, computer-driven telescope. He houses it in a backyard garden shed with a retractable roof. Amateur setups like his can cost as much as $25,000; but, like most amateurs, Kowalski put it together himself, without the benefit of NASA endowments." [20])

We're still a long way from the sort of broad-based disaster preparedness I propose above, but this is a start. And, in some ways, we may be closer -- even without government programs -- than we realize.


I've just been reading Steve Stirling's recent novel, Dies the Fire, [21] in which every piece of technology more sophisticated than a waterwheel or a crossbow quits working. In Stirling's story, lots of people die, of course, but civilization doesn't, quite. And though some might find the extent to which his leading characters are able to draw on expertise gathered via the Society for Creative Anachronism and various back-to-the-land hippie movements a bit convenient, I actually know many people with those sorts of skills. And there seem to be a lot more floating around out there. (Just look at the website for the Roman reenactment group, the XXIVth Legion [22] -- and be sure to check out the ballista page.) It's almost as if, as we move up the technological curve, interest in old innovations is growing.

Why is that? Cultural explanations no doubt exist for why geeks in particular are fascinated with obsolete technologies, but it's certainly the case that any gathering of geeks or science fiction fans will find a lot of people interested in old technologies: from arms and armor, to brewing and viticulture, to seafaring and agriculture.

It's not just geeks, by any means, who make a hobby of such undertakings. All kinds of people find such archaic arts interesting and apply their surplus time and money to them. As a side effect, though, we have a large bank of people possessing all sorts of skills (and not just the out-of-date kind, but modern skills like astronomy or obscure languages) that aren't especially useful now. But they might be someday.

This is the real lesson. We have such a diversified collection of skills because our society is rich enough and free enough that people have leisure time for such pursuits. No plausible government program could prepare us adequately for the kind of unlikely cataclysm Stirling envisions -- but, in fact, if we should ever find ourselves needing people who can construct a lorica segmentata, we've got them. In fact, thanks to the wonders of the free market, such folks are already supporting themselves, without government money. (See, for example, the website of Albion Arms, which will happily sell you a lorica segmentata or a broadsword, for a substantial sum. [23])

A society that's rich and free will have citizens who -- entirely on their own -- develop a wide range of skills. Most of these skills will never provide more than hobby-level amusement for their owners, but in the aggregate they provide a resource that could not easily be developed through any sort of government program. And that's a kind of disaster preparedness too. The kind that's not available to a herd.

Of course, sometimes we get a herd, not a pack, and the disgraceful behavior of the looters (and the not-very-admirable behavior of the people who refused either to prepare themselves or to evacuate the city) produced nasty results in New Orleans after Hurricane Katrina. Part of this is, of course, that the citizens with skills, resources, and public spirit did mostly evacuate, leaving the city occupied by those with none of these. Such a situation also underscores the point that some sort of infrastructure -- whether created by the government or someone else -- helps a lot. People often self-organize, but it's easier to do under some circumstances than others.

Such self-organization did happen in New Orleans, in neighborhoods where community ties were stronger. In the French Quarter, for instance, people formed "tribes" and divided up the various chores required to survive the recent hardships. An excerpt from an Associated Press account shows how effective these informal groups were:

As some went down to the river to do the wash, others remained behind to protect property. In a bar, a bartender put near-perfect stitches into the torn ear of a robbery victim.

While mold and contagion grew in the muck that engulfed most of the city, something else sprouted in this most decadent of American neighborhoods -- humanity.

"Some people became animals," Vasilioas Tryphonas said Sunday morning as he sipped a hot beer in Johnny White's Sports Bar on Bourbon Street. "We became more civilized." [24]

For residents of the French Quarter, loyalty to an established neighborhood -- and familiarity with each other -- made this sort of thing easier. (Likewise, in Houston, armed citizens banded together to prevent looting after Hurricane Rita. [25]) Rich societies, like richer neighborhoods, will generally have more of this sort of mutual trust and cooperation than poor ones. But it's something we should foster everywhere, not only as a good in itself but because it helps to protect society from all sorts of problems -- including, and perhaps especially, the kinds of problems that nobody even foresees today.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sat Nov 02, 2013 10:24 pm


Nothing is so unsettling to a social order as the presence of a mass of scribes without suitable employment and an acknowledged status. -- ERIC HOFFER [1]

Zeyad was a twenty-eight-year-old dental student in Baghdad. He had never worked as a journalist, but American journalist-blogger Jeff Jarvis found his weblog, Healing Iraq, and liked it. Jarvis, the president of Conde Nast's Internet division and a huge fan of Iraqi and Iranian bloggers, had Federal Expressed him a digital camera the week before, paying more in shipping than the camera cost. Zeyad was still learning to use it when he covered a mammoth antiterrorist/anti-Baath demonstration in Baghdad, posting pictures to his blog. [2]

Over twenty thousand people marched. Western media ignored the story, but in spite of this neglect, Zeyad's pictures and reporting attracted the notice of Americans. Hundreds of thousands saw his reports on the Internet, and the next week the Weekly Standard reprinted them, photos and all. [3] It was a swift move: from an obscure website to coveted print real estate in less than a week. Even more striking, the left-leaning webzine Salon was inspired to run a story on how Zeyad had "scooped" the New York Times, which had published a context-less photo from the march but otherwise ignored it. [4] Before the Internet, and blogs, the Times' omission would have kept us ignorant, but this time it left the Times embarrassed and readers aware that stories were going unreported.


The Zeyad story points up a typical pattern in the relationship between Big Media and blogs. Before Zeyad embarrassed the Times, bloggers had noticed remarks made by then Senate Majority Leader Trent Lott at Strom Thurmond's 100th birthday party, remarks suggesting that Lott would have preferred to see the segregationist Dixiecrat Party (on whose ticket Thurmond had run) win the presidency in 1948. Although these comments were made at a gala event with numerous reporters in attendance, they weren't reported in the news until several days later, after bloggers on both the left and right had made a stink. By the time it was over, Lott was an ex-majority leader. [5]

During the 2004 election, blogs and online media played a major role both in spotting stories that the Big Media had missed and in correcting stories that the Big Media got wrong. The most famous example involved the so-called "RatherGate" scandal, in which CBS relied on documents that turned out to have been rather clumsily forged, in a story alleging that President Bush had been given special treatment while serving in the Texas Air National Guard. Another example involved Democratic candidate John Kerry's claim to have been in Cambodia on Christmas Day 1968, which turned out not to be the case either. Yet another involved a false Associated Press report that a pro-Bush crowd had booed former President Bill Clinton when Bush reported that Clinton was having heart surgery. Bloggers who had attended the rally responded with firsthand reports that included audio and video, making it clear that the AP story was false.

These examples are some of the most famous, but focusing on them misses the point, which goes well beyond the occasional scoop. The trouble is encapsulated in Ken Layne's now famous statement that this is the Internet, "and we can fact-check your ass." [6] Where before journalists and pundits could bloviate at leisure, offering illogical analysis or citing "facts" that were in fact false, now the Sunday morning op-eds have already been dissected on Saturday night, within hours of their appearing on newspapers' websites.

Annoyance to journalists is the least of this; what is really going on is something much more profound: the end of the power of Big Media.

For almost a hundred years -- from the time William Randolph Hearst pushed the Spanish-American War, to the ascendancy of talk radio in the 1990s -- big newspapers and, later, television networks have set the agenda for public discussion and tilted the playing field in ways that suited their institutional and political interests.

Not anymore. As UPI columnist Jim Bennett notes, what is going on with journalism today is akin to what happened to the Church during the Reformation. [7] Thanks to a technological revolution (movable type then, the Internet and talk radio now), power once concentrated in the hands of a professional few has been redistributed into the hands of the amateur many. Those who do it for money are losing out to those who (mostly) do it for fun.

Beware the people who are having fun competing with you!

Nonetheless, weblogs are not likely to mark the end of traditional media, any more than Martin Luther marked the end of the popes. Yet the Protestant Reformation did mark an end to the notion of unchallenged papal authority, and it seems likely that the blog phenomenon marks the beginning of the end to the tremendous power wielded by Big Media in recent years. Millions of Americans who were once in awe of the punditocracy now realize that anyone can do this stuff -- and that many unknowns can do it better than the lords of the profession.

In this we are perhaps going full circle. Prior to the Hearst era -- and even, to a degree, prior to World War II -- Big Media power was countervailed by other institutions: political parties, churches, labor unions, even widespread political discussion groups. The blog phenomenon may be viewed as the return of such influences -- a broadening of the community of discourse to include, well, the community.

And it's possible that blogs will have a greater influence than these earlier institutions for a simple reason: they're addictive, and many of the addicts are mainstream journalists, who tend to spend a lot of time surfing the Web and who like to read about themselves and their colleagues. This means that blog criticism may have a more immediate impact than might otherwise be the case.

If so, it will be a good thing. Americans' trust in traditional Big Media has been declining for years, as people came to feel that the news they were getting was distorted or unreliable. Such distrust, while a natural phenomenon, can't be a good thing over the long term. In this, as in other areas, competition is the engine that will make things better.


And it had better. For the sad truth is that although bloggers are often criticized for producing more opinion than original reporting (some critics call them "parasites" on Big Media's hard-news reporting), even top-of-the-line mainstream news institutions like the New York Times are becoming more like the bloggers all the time, reducing staffs, cutting the size and number of foreign bureaus, and relying more and more on wire services for original reporting to which they add commentary and "news analysis" (it's "value added" rather than parasitism when Big Media does it). But the real appeal of this reduction to management is that it's cheap, while reporting is expensive. Decades of cost cutting and corporate consolidation at newspapers, magazines, and television networks have caused them to sharply reduce their core competency of news gathering and reporting. [8] Where they used to have bureaus in all sorts of places, now they don't. Like the industrial beer makers, they've watered down their product over a series of individually imperceptible cost-cutting stages, until suddenly it's reached a point where a lot of people have noticed that it lacks substance and flavor. That opens an opportunity for a widely dispersed network of individuals to make a contribution.

Traditionally, the big things that mainstream journalism offers are reach and trustworthiness. Critics of media bias may joke about the latter, but though reporters for outlets like Reuters or the New York Times may -- and do -- slant their reporting from time to time, their affiliation with institutions that have a long-term interest in reputation limits how far they can go. When you rely on a report from one of those journalistic organs, you're relying, for better or worse, on their reputation. And when they ask you to believe their reports, they're relying on their reputations too.

But big institutions aren't the only way to have a reputation anymore. As Web-based outfits like eBay and Slashdot are demonstrating, it's possible to have reputation without bureaucracy. Want to know whether you can rely on what someone says? Click on his profile and you can see what other people have said about him, and what he's said before, giving you a pretty good idea of his reliability and his biases. That's more than you can do for the person whose name sits atop a story in the New York Times (where, as with many Big Media outfits, archives are pay-only and feedback is limited).

An organization that put together a network of freelance journalists under a framework that allowed for that sort of reputation rating, and that paid based on the number of pageviews and the ratings that each story received, would be more like a traditional newspaper than a blog, but it would still be a major change from the newspapers of today. Interestingly, it might well be possible to knit together a network of bloggers into the beginnings of such an organization. With greater reach and lower costs than a traditional newspaper, it might bring something new and competitive to the news business.


In the meantime, we tend to see this dynamic mostly when bloggers self-organize around a particular big event: the Indian Ocean tsunami, hurricanes or terror attacks in the United States, and so on. Like the "flash crowds" that gather by text-message and email, the bloggers swarm around a topic and then disperse.

This "flash media" coverage does a lot of good. Sometimes -- as in the Trent Lott case, documented in a lengthy case study by Harvard's Kennedy School of Government, or in Iraqi blogger Zeyad's coverage of pro-democracy rallies in Baghdad, [9] scooping the New York Times -- this sort of coverage gets Big Media entities interested. But even when Big Media snubs such coverage, bloggers let hundreds of thousands of people read about, see, and sometimes even experience via video a story that they would otherwise miss.

I don't think that weblogs and flash media will replace Big Media any time soon. But I keep seeing evidence that they're doing a better and better job of supplementing, and challenging, Big Media coverage. I think that's a wonderful thing, and it's one reason why I'm such an evangelist for the spread of enabling technologies like Web video and cheap digital cameras. The more people there are with these sorts of things, the more of a role there will be for flash media in covering news, and for more sophisticated ways of drawing this sort of coverage together on a more routine basis. Just another thing for the Old Media guys to worry about.

The end result of the blog revolution is to create what blogger Jim Treacher calls "we-dia." [10] News and reporting used to be something that "they" did. Now it's something that we all do. This is sure to irritate the traditional press, which has always seemed to favor exclusivity -- just read any of the journalism trade papers for an example of the guild mentality that seems to pervade the field -- but it may also save press freedom from the problems created by the press.

I worry that freedom of the press -- which in its modern extent is basically a creature of the post-World War II Supreme Court -- is likely to be at risk if people see it as merely a special-interest protection for a news-media industry that is producing defective products that do harm.

But, as Alex Beam notes in the Boston Globe, media folks often encourage such a view, by failing to stand up for the free-speech rights of non-Big-Media folks:

Apple Computer sued 19-year-old journalist Nicholas Ciarelli in January for disclosing trade secrets on his Apple news website Think Secret. A typical Think Secret annoyance: The site correctly predicted the appearance of the Mac Mini, a small, low-cost Macintosh computer, two weeks before the product was officially announced.

Ciarelli is accused of doing exactly what reporters all over America are supposed to be doing: finding and publishing information that institutions don't want to reveal....

Where are the always-vocal guardians of the First Amendment? Where is the American Civil Liberties Union? Where is the American Society of Newspaper Editors? Where, for that matter, is Harvard's Nieman Foundation? [11]

Apparently, Ciarelli's status as "non-traditional media" has cost him support. But that's a mistake. Big Media outfits have been squandering their credibility and public regard for decades (see, for example, Dan Rather and Jayson Blair, or the exaggerated stories of death and lawlessness after Hurricane Katrina [12]), and I suspect that this is likely to put free-press protections at risk. It's easier to support freedom of the press when you think the press is responsible. Ironically, their greatest hope for salvation is for lots of nontraditional media to get involved in publishing too, giving the public at large a greater stake in freedom of the press.


If Americans regard press freedom as someone else's protection, they're likely to be much cooler toward the First Amendment than if they regard press freedom as their own. And that sense of ownership is more likely to develop if the explosion of self-published Internet media, often sniffed at by traditional media folks, continues. If Big Media is to be saved, it may be Little Media that is responsible.

Another question is whether Little Media can be saved from itself. Some people, invoking the usually sad fate of email lists and online bulletin boards, wonder if Web journalism is doomed to be overrun by trolls and flamers who ruin things for everyone else. I think the answer is no. In legal and economic analysis, a "commons" is a resource that anyone can use. The classic example is the common grazing field shared by everyone in a village. As long as there's enough to go around, its common character is a benefit: there's no need to waste time dividing it up and assigning rights when there's enough for everyone.

The problem is when there are more people wanting to use the resource than it can support. Everyone could just cut back -- but since there's no guarantee that other users will cut back, a rational user won't cut back but will try to grab as much as possible before someone else gets it. Grazing becomes overgrazing in a hurry under these circumstances, and everyone is worse off. Soon, there's nothing to do but to move elsewhere, as the previously settled area becomes a desert. The classic term for this problem is "the tragedy of the commons," after a famous article by that name. [13]

This model wouldn't seem to fit the Web very well, though. There aren't many commonly held resources, and most of them aren't really limited. Bandwidth, maybe, in shared networks, but that's pretty easy to address. (Actually, the use of overall Net bandwidth for spam may fall into the "overgrazing" category, but that's a topic for another day.)

But if there's one scarcity that everyone will agree on, it's time. Napoleon told his generals, "Ask me for anything but time," but he didn't know the half of it. For my own blog, I try to get around to as many sites as possible, but it's a hopeless effort: the number of new sites is expanding far faster than I can follow. And email is worse. I get hundreds of emails.

But that difference -- between visiting sites and receiving email -- is one reason why I think that the blog world, and the new journalism that resembles it, won't succumb to the tragedy of the commons the way that email has. Think about an email list: everyone can post freely to the list, but by doing so they consume readers' time. In a sense, there's a common pool of reading hours available, determined by the number of hours the average reader is willing to devote to mail from the list, multiplied by the number of readers. Each post to the list consumes some of that time, but at minimal cost to the poster in relation to the amount of time consumed. And the bigger the list, the greater the payoff (other people's time consumed) versus the cost (the poster's time).
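The arithmetic of that common pool can be sketched in a few lines of Python. The numbers below are hypothetical, chosen only to illustrate the asymmetry between the poster's cost and the readers' cost, and why a large list runs out of shared attention quickly.

```python
# Hypothetical model of an email list as a commons of reader-hours.
readers = 1000          # subscribers on the list
hours_per_reader = 0.5  # hours each reader will spend on list mail per day
minutes_per_post = 2    # reading time one post consumes per reader

# Total daily pool of attention, in minutes (the "common field"):
pool_minutes = readers * hours_per_reader * 60

# One post's footprint on the commons versus its cost to the poster:
time_consumed = readers * minutes_per_post  # others' minutes consumed
posters_cost = 5                            # minutes to dash off the post

print(pool_minutes)   # → 30000.0 minutes of shared attention per day
print(time_consumed)  # → 2000 minutes consumed by one five-minute post

# The pool supports only this many posts a day before readers' time
# runs out -- and nothing stops the next poster from overgrazing:
print(pool_minutes / time_consumed)  # → 15.0
```

The key imbalance is that five minutes of the poster's time consumes two thousand minutes of everyone else's, and the bigger the list, the worse the ratio gets.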

Left to themselves, then, you'd expect that email lists and similarly structured systems would succumb to a tragedy of the commons: excessive posting that consumes so much time that people abandon them and they die. (As a corollary, it would seem likely that the people whose time is the least valuable will post the most -- since they incur the lowest cost in doing so -- and if you assume that their time is less valuable because they're, well, dumb or crazy, then the more posts you see, the lower their likely value.) This does seem to describe the fate of many email listservs, which start out well, with a few members, flourish and grow for a time, but then degenerate into flamefests and collapse. A similar phenomenon seems to affect chat rooms, message boards, and the like. Some people are suggesting that even well-established sites like Slashdot may suffer from this kind of thing, though I think the jury is still out on that one.

So, despite all the blogosphere hype, is the world of blogs headed the same way? It could be, but I'm going to predict that it isn't. The reason is that people who post on blogs can't commandeer the time of others: nobody will read their stuff except voluntarily since -- unlike email on a listserv -- reading a weblog requires a deliberate act. As a blog reader, you control your time; as the member of an email list, you don't. So although individual blogs may collapse into Usenet-style flaming, they'll either lose their audiences or accumulate a reader base that wants to read flaming, in which case it's not really flaming -- for our purposes -- at all.

As Nick Denton says: "[T]his is the way to deal with flamers: let them post on their own damn sites. And then let everyone else ignore them. Weblogs are a gigantic interlinked discussion forum, in which it's trivially easy to route around idiots." [14]

It's another example of what some people (well, Jeff Jarvis, and now me) are calling Jarvis's Laws of Media:

Jarvis's First Law: Give the people control of media, they will use it. The corollary: Don't give the people control of media, and you will lose.

Jarvis's Second Law: Lower cost of production and distribution in media inevitably leads to nichefication. The corollary: Lower the cost of media enough, and there will be an unlimited supply of people making it. [15]

I think that he's right, and that the implications go beyond routing around idiots. And so, I suspect, does Jonathan Peterson, who wrote:

At a very fundamental level, the Big Content companies don't understand the revolution that is happening in the digital media realm. They still see us as consumers only capable of digesting their offerings and handing over money. They really don't seem to understand that the reason we are buying PCs, video cameras, digital cameras, broadband connections and the like is that we want to create and share our creations. The quality of "amateur" content is exploding at the same time that Big Media companies are going through one of their all-time lows in music and television creativity. No wonder we're spending more time with our PCs than we are with our TVs. [16]

And when "making" media is cheap, and an unlimited supply of people are "making it," what happens to journalism? Something that journalists may not like: Journalism, right now, is in the process of reverting to its earlier status as an activity, rather than a profession.

Which brings me to my last prediction. Actually, it's one I've made before: "[I]f Big Media let their position go without a fight to keep it by fair means or foul, they'll be the first example of a privileged group that did so. So beware." In the wake of the humiliation visited on Big Media by such debacles as RatherGate, I think we're already beginning to see signs of that backlash, complete with the growth of alarmist articles (like a recent cover story in Forbes) on the dangers posed by bloggers. [17] And the press establishment's general lack of enthusiasm for free speech for others (as evidenced by its support for campaign finance "reform") suggests that it'll be happy to see alternative media muzzled. Big Media outfits haven't been very enthusiastic about extending the "media exemption" of the McCain-Feingold campaign finance "reform" act to bloggers, for example. You want to keep this media revolution going? Be ready to fight for it. I think people will be. Am I too optimistic? We'll see.

I could write more about the role of blogs in changing politics and media, but that task has been admirably performed by Dan Gillmor in We the Media, [18] Joe Trippi in The Revolution Will Not Be Televised, [19] and especially by Hugh Hewitt in his book Blog: Understanding the Information Reformation That's Changing Your World. [20] But what I can do is give you an insight into some of the people who are going beyond blogging and into independent journalism -- doing the kind of thing, thanks to technology, that only Big Media employees used to be able to do.

One of them is J. D. Johannes, whose blog, Faces from the Front (facesfromthefront.com), and documentary have attracted a lot of attention. There's been lots of unhappiness with media reporting from Iraq. But where people used to just complain about that, now people are doing something about it. Johannes is one of them. I interviewed him recently.

Reynolds: What's your project all about? How did you come up with the idea?

Johannes: The project is about telling a story that otherwise would have gone untold. The story of one platoon of Marines, all of which are volunteers, as they root out insurgents in Iraq's Al Anbar province. The story is told through three mediums: Web, at local TV news stations in Kansas and Missouri, and a long form documentary for local PBS tentatively titled "Outside The Wire." Washburn University has partnered with me for the documentary, making me an adjunct professor in the Military and Strategic Studies Department. The PBS station, though licensed to the university, has been a challenge.

Local TV affiliates were not going to Fallujah to follow a group of Reserve Marines from their area. Local PBS stations were not going to Fallujah to produce a long form documentary about Reserve Marines from their area. The big networks were never going to cover a group of Reserve Marines from Kansas City. The daily newspapers were not going to cover them. The story of the courage, dignity, and compassion of this group of Marines would never be told, unless I went.

The only time local stations or newspapers cover a local Marine or soldier is if they die in combat. That is an outrage.

The idea sprang to me in late December 2004. I was dissatisfied with the coverage of the war. I had an idea turning in my head about how coverage could be improved through syndication targeted for local TV markets. My college roommate and occasional business crony, David Socrates, thought the idea was a bit crazy, but jumped on board. When I learned that an infantry platoon from my former Marine Reserve unit was being deployed to Iraq, everything became clear. I knew what I had to do. I did it because I knew no one else would. A lot of others could, but no one else would.

Reynolds: How has technology played a role in letting you do this sort of thing? Would it have been possible twenty years ago? If possible, feasible?

Johannes: This project, the way it was thrown together, would not have been possible ten years ago. The major technological leap forward is in the low cost availability of 3CCD cameras that shoot broadcast-quality video and off-the-shelf video editing software that rivals television production equipment. Ten years ago, a production quality camera would cost $25,000-$40,000. The editing equipment would have been a Video Toaster or two bulky decks and two bulky monitors. The total cost being around $100,000. But now, it's $4,000 for a 3CCD camera, $1,000 for Adobe Premiere software plus features, and $2,000 for a laptop computer. We ship video from Iraq using a combination of FedEx and NorSat KU band satellite transmissions. Neither of which would have been feasible twenty years ago.

The Web end of things obviously wouldn't have worked twenty years ago. Ten years ago, the Web part of the project would have been slower, with fewer and shorter video clips. The facesfromthefront.com website would not be nearly as rich in content ten years ago; server space would have been too expensive.

Ten or fifteen or twenty years ago, a person with a large, well-established video production company could be doing what we are doing. But a small company or a start-up company? Not a chance. The initial capital investment would have been too great.

Reynolds: Do you see a trend toward independent news gathering and filmmaking of this sort? Should the Big Media folks be worried, or should they see it as an opportunity?

Johannes: The technological trend should result in more independent news gathering and filmmaking. I once had a conversation with Hernando De Soto about how technology made the U.S. look so different from parts of Europe, especially in the red states. In Kansas, where I live, everything is spread out. The roads are wide, suburbs and cities distant, and the farms massive. The railroads were the first bit of technology that allowed this, as cities were built around rail terminals. Then came cars. The cities of the American Midwest were built for cars. Farms expanded as the tools of production (tractors, combines, etc.) became larger allowing one person to do more work.

The availability of the cameras, recorders, affordable server space, and affordable software will open up the news game to more people.

Over time, news gathering will reflect the technology that makes it available, but the Big Media will resist it. Not the business end of Big Media, they will adopt it, but the reporters, producers, and editors will resist it.

The second phase, and this will be the angle TV is likely to take, is in specialized syndication.

Every local TV station has a "Statehouse" reporter. What makes these reporters so special that their coverage should be respected? Nothing, other than they work for an identifiable and reliable media outlet.

Do they have any special knowledge of law, politics, government, economics, policy, etc.? No. They have a bachelor's degree in mass media or journalism, possibly the worst education possible outside of a teaching degree.

I worked in television for four years producing newscasts every day. These reporters are some of the least equipped individuals to be covering important topics that affect people's lives. And in TV news, performance abilities are rewarded more often than analytical ones.

And there is a "paying your dues" aspect to TV news. Everyone must start at the bottom and work their way up, unless they have a patron or a well-placed uncle.

The concept of some guy with a camera being able to produce stories and analysis superior to that of the Big Media is a threat to the status quo, and humans hate threats to the status quo, especially if it affects their livelihood.

The news directors and producers would be incredulous at the idea of some lawyer covering the statehouse. That would be an infringement on their turf.

But upper management could see the economy of scale. If one man and a camera could cover the statehouse under a syndicated contract for $6,000 and get one station in four markets to buy in, he could make $24,000 a year for working just six months. If he had something else on the side, he could make a respectable living.

The resistance would not come from upper management, but from the news director, who would see this freelance interloper as an invader. In a newspaper, the same resistance would come from the lesser editors.

Indeed, I experienced this firsthand a few times with the Iraq project. But most original news coverage by bloggers resembles first person rambles, not news. A mere change in style would go a long way.

Because most bloggers are hobbyists, serious citizen journalist hobbyists, they are not able to devote the resources necessary to original reporting. The bloggers provide the best background information and in-depth analysis, but they rarely produce fresh news.

When enough bloggers take the leap, and start reporting on the statehouse, city council, courts, etc., firsthand, full-time, then the Big Media will take notice and the avalanche will begin .... If it can be done in Iraq, it can be done in statehouses and city hall.

Johannes is right. Technology has made all sorts of things possible. Twenty years ago, or even ten, it took a huge infrastructure to allow one guy in a safari jacket to report from places like Baghdad and pretend he knew what was going on there. Now it can be a do-it-yourself project, and unlike the "bigfoot" reporters of major media, who tend to drop in for a few days and then move on, the do-it-yourselfer is more likely to stay on the ground long enough to actually learn what's going on firsthand. This is probably bad news for terrorism, which is an information warfare operation disguised as a military one, and one that is based on taking advantage of the kind of reporting (hysterical and shallow, for the most part) that traditional mass media tend to do.

I suspect that the growth of guerrilla media -- ranging from operations like Faces from the Front, to reporting by freelancers like Michael Yon (interviewed below), to reports from Iraqi bloggers and even emails from soldiers -- has made the terrorists' task tougher, as the reporting is by people who are much closer to what's really going on and are much more closely connected to their audiences.

I also agree that the local-reporting angle is likely to be big. Most media coverage is wide but shallow. Individuals can actually outperform big news organizations when it comes to reporting on a single topic, and as it becomes easier for individuals to develop and market niche expertise, we'll see more of that. How will Big Media respond? It will be interesting to find out.

Meanwhile, another journalist, Michael Yon, is covering Iraq in a different way at his blog. His first person reporting reads like Ernie Pyle's, and he often takes photos in the midst of combat. I interviewed him, too, to see what he thinks about the new approach to news gathering.

Reynolds: Please tell me a bit about your background, and how you decided to embark on this project.

Yon: I was born and raised in Florida, where I learned at a young age how to successfully hunt, kill, and eat alligators much larger than I am. I was different than the other boys in that my favorite three subjects were physics, physics, and physics. I also was very serious about sports, mainly because I was small and got beat up by my big brother a lot, and wanted to put an end to that, which I eventually did. I joined the Army for university tuition. I volunteered and was selected for Special Forces, which I enjoyed immensely, except that I hated wearing uniforms. After running several businesses, I started to write, more as a way to get perspective than as the first step toward finding out that what I most enjoy is traveling the world, exploring fascinating places, and writing about them. As for Iraq, I maintain friendships with former Special Forces teammates and other service members, most of whom are still active duty. The war is a major event for this and future generations. I had, and continue to have, complex and sometimes contradictory opinions about this war. What made me embark on this project was the need to see things firsthand, to find out for myself what is going on, what it means, and how it is going to affect all of us for a very long time.

Reynolds: What are you trying to accomplish with your reporting? What will the final result be? A book?

Yon: I am chronicling my observations of this war over an extended period. My independence is important on many levels. I am beholden to no agency and I don't need to produce copy on a deadline. So I can write about what I am seeing and take time to do so properly. Journalists of many sorts fly through here for short times, and there are a handful of semi-permanent reporters from a few majors such as CNN and Time. Some of these are good and serious folks, but I think they are hobbled by working for agencies and are not free to roam and follow their instincts. Being completely independent allows freedom to roam the battlefield from north to south, from Iran to Syria, and to describe without filters what I see. The events in Iraq are singularly critical to the futures of billions of people. Given that such incredible events are taking place, and that I am committed to being here as long as I still have unanswered questions ... definitely, I will write a book.

Reynolds: What kind of a role does technology play in making your reporting possible? Could you have done this sort of thing twenty years ago?

Yon: The Internet makes wide and near-instantaneous reporting simple. Also, satellite and cell phones in Iraq allow for real-time reporting by nearly anyone. I do not "report" in real time -- I am not actually a reporter -- but am able to post dispatches that are being read all around the world. I think a generation earlier my background might have afforded access that the embedded reporter system now grants just about any reporter, journalist, or filmmaker. But the military's attitude toward the media has changed almost as dramatically as the technology around communications has developed. So I might have been able to tag along and observe and later write a book about my experiences, but I definitely couldn't have blogged it.

Reynolds: Do you see independent reporting as the future of news? What role do you think it will play? Should Big Media folks be worried, or should they see it as an opportunity?

Yon: I don't think anyone can predict the future of news. Some question whether it's even really still news in the classic Edward R. Murrow sense. Clearly we are shaking the tree where the Big Media has been perched. The "little guys" are increasingly not so little, they have grasped the power of the Web, and they have increasing credibility and exposure.

It's still a little wild in the streets in terms of what passes for credible information. Sometimes blogs seem like the transcripts for radio talk shows. But lately mainstream media is getting the story leads for Iraq from independents and bloggers. I get contacted frequently by an assortment of big players such as the New York Times, Washington Post, LA Times, FOX, and just a couple weeks ago I "scooped" a major story from the grips of CNN (quite by accident).

When I want firsthand and nitty-gritty information about an area in Iraq, I search for bloggers in that area and then decide for myself if they sound credible. For firsthand information in Iraq, the best sources definitely are not mainstream media, all of which have become fixated with counts: numbers of car bombings, numbers of dead, numbers of insurgents captured, etc. But for real stories, the majors have lost the battle in Iraq. There is no question that the best sources for detailed information in Iraq tend to be bloggers. Mainstream media straggles further behind every day.

Should they be worried? If they really care about the legacy of solid journalism, probably yes. But if they only care about the bottom line, they are probably already thinking up some "reality TV" version of the news, maybe some program where they gather bloggers from around the world, put them in a wired house, and film them finding and reporting news....

Reynolds: You write in a personal voice, more like the old-time reporting of Ernie Pyle than like most modern war correspondents. Why did you decide to take that approach? Is it part of reporting in your own name?

Yon: This is the easiest question to answer. Firstly, I never studied journalism, so I have little frame of reference past or present. I write in first person because I am actually there at the events I write about. When I write about the bombs exploding, or the smell of blood, or the bullets snapping by, and I say "I," it's because I was there. Yesterday a sniper shot at us, and seven of my neighbors were injured by a large bomb. These are my neighbors. These are soldiers I have borrowed camera gear from (soldiers who have better photo gear than I have). These are the people who risk their lives for me. I see them bleed, I see them die, I see them cry for their friends, and then I see them go right back out there on missions, and I see them caring for Iraqi people and killing the enemy. I feel the fire from the explosions, and am lucky, very lucky, still to be alive. Everything here is first person.

Yes, it is. And that first-person character is one of the strengths of the independent journalism that the Internet and other technologies make possible. Over the coming decade, we'll see the growth of alternatives to traditional Big Media, and -- if we and Big Media are lucky -- we'll see Big Media moving to ally itself with the Davids, rather than positioning itself against them.

We've seen a few signs of that. After the Indian Ocean tsunami, and again after hurricanes like Katrina and Rita, we've seen newspapers and television stations incorporate citizen journalism into their coverage via blogs, chat boards, and other mechanisms. In a crisis, the value of having thousands of potential correspondents out there with computers, digital cameras, and other technology is obvious. But in fact, the value is there all the time. Noticing that may take them a bit longer, but I suspect that they will notice it in the end. Those who don't may wind up being replaced by those who do.

Re: An Army of Davids: How Markets and Technology Empower

Postby admin » Sat Nov 02, 2013 10:31 pm



So what makes a blog good? First the inevitable, though sincere, dodge: it depends. Blogs come in many different flavors and styles -- though political and tech blogs get the most attention, there are many other varieties (including the huge but largely ignored mass of gay blogs), and what makes one good or bad naturally varies accordingly. What's more, there's a way in which blogging, like jazz, always succeeds: if it's reflecting the feelings of the blogger, it's a success on some level, regardless of whether anyone else likes it. (There's only one hard-and-fast rule: Get rid of the typos. No blog that's full of typos looks good.)

But that said, there are some things that, in my opinion, make good blogs good. And the most important of those things are (1) a personal voice and (2) rapid response times. By this token, some blogs aren't really full-fledged blogs: my MSNBC web log has a personal voice, but since MSNBC's antiquated publishing platform means that I have to email my entries in and then wait hours until they appear on the site, it doesn't offer the kind of rapid response -- and on-the-fly editing and revision -- that more typical blogs, powered by things like Movable Type, Blogger, or WordPress, offer. On my InstaPundit blog, which is powered by Movable Type, I can post something, think better of it moments later, and change it, or add an update in response to a reader email that comes in sixty seconds after it's posted. I can't do that at the MSNBC blog -- a significant lack in a medium that thrives on lively forums of cumulative dialogue and witty repartee.

On the other hand, a number of house blogs have rapid response, but no personal voice. One magazine's house blog, for example, used to be timely and interesting, but anonymously institutional -- so its editors added names. The same is true for the American Prospect's blog, Tapped, and the New Republic's blog, &c. And the 2004 presidential campaign blogs were tedious -- basically just a series of press releases.

By contrast, the National Review Online house blog, The Corner, features signed entries by many different NRO writers and rather a lot of back-and-forth disagreement and personal reflection, which makes it far livelier and far "bloggier" than its more staid competitors. The Huffington Post is an online blog-collective that's all about having named contributors. The same is true for Reason's house blog, Hit&Run, which also has signed entries and considerably more life to it than the anonymous house blogs. It's no wonder that the latter have become less common.

So while you can have an anonymous "institutional" blog with rapid response, it's bound to have an institutional voice, which isn't as interesting or, I suspect, as much fun for the writers. The Corner, and to a lesser degree Hit&Run, seem to attract posts at all hours too, while the anonymous institutional blogs seem to operate on a more 9-5 or, really, 10-2 basis, with postings not only less personalized, but less frequent. I suspect that means that the anonymous house blogs feel like work to their writers, rather than like self-expression. So the personal voice seems to be awfully important to good blogging, and to frequent blogging. (Note that you can have a personal voice in an anonymous blog, but not in an anonymous institutional one -- there are plenty of anonymous non-institutional blogs with strong personal voices.)

Then, most importantly, there is the link. And here, I'll quote James Lileks:

A wire story consists of one voice pitched low and calm and full of institutional gravitas, blissfully unaware of its own biases or the gaping lacunae in its knowledge. Whereas blogs have a different format: Clever teaser headline that has little to do with the actual story, but sets the tone for this blog post. Breezy ad hominem slur containing the link to the entire story. Excerpt of said story, demonstrating its idiocy (or brilliance). Blogauthor's remarks, varying from dismissive sniffs to a Tolstoi-length rebuttal. Seven comments from people piling on, disagreeing, adding a link, acting stupid, preaching to the choir, accusing choir of being Nazis, etc.

I'd say it's a throwback to the old newspapers, the days when partisan slants covered everything from the play story to the radio listings, but this is different. The link changes everything. When someone derides or exalts a piece, the link lets you examine the thing itself without interference. TV can't do that. Radio can't do that. Newspapers and magazines don't have the space. My time on the Internet resembles eight hours at a coffeeshop stocked with every periodical in the world -- if someone says "I read something stupid" or "there was this wonderful piece in the Atlantic" then conversation stops while you read the piece and make up your own mind. [1]

When hypertext for computers was first invented (lawyers invented hypertext for paper back in the Middle Ages, but that's another topic) the inventors thought it would revolutionize discourse, and it has. People who write on dead trees can still (sort of) get away with mangling quotes to produce a desired meaning -- though bloggers will quickly call them on it -- but bloggers tend to link to original sources wherever possible. The result, as Lileks says, is that you can follow the link and make up your mind for yourself. A blog that doesn't have links is less interesting. The link isn't a guarantee of accuracy, of course -- the source you're linking to can always be wrong -- but it does let the readers evaluate the source themselves.

The best links, usually, are to things the reader would never have found otherwise. Fred Pruitt's Rantburg blog specializes in interesting information from obscure military and regional sources. Meanwhile Caterina Fake's blog -- probably my favorite of the largely non-political, day-in-the-life blogs -- has posts on things like what to do in Finland and is full of links and reader comments. In both cases, the selection of links has to do with the "personal voice" thing: Fred and Caterina are (very) different people. Both have built blogs around their own knowledge and interests, instead of trying to imitate someone else, and the result, in both cases, has been something very interesting and useful indeed.

Bloggers who don't unearth unusual news, on the other hand, can still stand out and contribute by having -- as James Lileks does -- a unique perspective on the stories people have already read. In this light it's no surprise that bloggers who are successful generally bring something special to the table. For famous journalist-bloggers, it's their journalistic smarts and connections, as evidenced in the work of Mickey Kaus, Virginia Postrel, and Josh Marshall.

For some, like Lileks, it's that they just flat-out write better than anyone else. Still others, like the various bloggers at The Volokh Conspiracy, or Howard Bashman at How Appealing, or Jeralyn Merritt at TalkLeft, offer academic or legal expertise. For others, like famed Baghdad blogger Salam Pax, or Zeyad, or the various bloggers who chronicled the "Orange Revolution" in Ukraine, it's their proximity to events. (And local blogging, I think, is something that's likely to take off, since it provides something -- knowledge of one's hometown -- that's comparatively scarce and hard for others to match.)

In every case, though, what brings success is knowing something other people don't know and expressing it well.

All of this means, of course, that if you came to this chapter looking for blogging secrets, well, there aren't any. The key to good blogging is simple: have something interesting to say, and say it well. Kind of like every other sort of writing -- just faster, and with links. There's nothing new about that, but it's still a kind of magic, as good writing always is.