Green Illusions, by Ozzie Zehner


Green Illusions: The Dirty Secrets of Clean Energy and the Future of Environmentalism
by Ozzie Zehner
University of Nebraska Press, 2012
© 2012 by Ozzie Zehner






“In this terrific book, Ozzie Zehner explains why most current approaches to the world’s gathering climate and energy crises are not only misguided but actually counterproductive. We fool ourselves in innumerable ways, and Zehner is especially good at untangling sloppy thinking. Yet Green Illusions is not a litany of despair. It’s full of hope—which is different from false hope, and which requires readers with open, skeptical minds.”

—David Owen, author of Green Metropolis

“Think the answer to global warming lies in solar panels, wind turbines, and biofuels? Think again. . . . In this thought-provoking and deeply researched critique of popular ‘green’ solutions, Zehner makes a convincing case that such alternatives won’t solve our energy problems; in fact, they could make matters even worse.”

—Susan Freinkel, author of Plastic: A Toxic Love Story

“There is no obvious competing or comparable book. . . . Green Illusions has the same potential to sound a wake-up call in the energy arena as was observed with Silent Spring in the environment, and Fast Food Nation in the food system.”

—Charles Francis, former director of the Center for Sustainable Agriculture Systems at the University of Nebraska

“This is one of those books that you read with a yellow marker and end up highlighting most of it.”

—David Ochsner, University of Texas at Austin


Contents:

• List of Illustrations
o 1. Solar system challenges
o 2. An imposing scale
o 3. Road infiltrates a rainforest
o 4. Mississippi River dead zone
o 5. Entering Hanford
o 6. A four-story-high radioactive soufflé
o 7. In the wake of Chernobyl
o 8. Flaring tap
o 9. Reclaiming
o 10. Prioritizing bicycle traffic
• List of Figures
o 1. California solar system costs
o 2. Solar module costs do not follow Moore’s law
o 3. Fussy wind
o 4. Five days of sun
o 5. U.S. capacity factors by source
o 6. Secret U.S. government document ORNL-341
o 7. Clean coal’s lackluster potential
o 8. Media activity during oil shock
o 9. Incongruent power plays
o 10. Congruent power plays
o 11. Global world population
o 12. Differences in teen pregnancy and abortion
o 13. Similarity in first sexual experience
o 14. American food marketing to children
o 15. GDP versus well-being
o 16. Trips by walking and bicycling
o 17. Walking and bicycling among seniors
o 18. U.S. energy flows
o 19. Passive solar strategies
o Table: The present and future of environmentalism
• Acknowledgments
• Introduction: Unraveling the Spectacle
• Part I: Seductive Futures
1. Solar Cells and Other Fairy Tales
2. Wind Power’s Flurry of Limitations
3. Biofuels and the Politics of Big Corn
4. The Nuclear-Military-Industrial Risk Complex
5. The Hydrogen Zombie
6. Conjuring Clean Coal
7. Hydropower, Hybrids, and Other Hydras
• Part II: From Here to There
1. The Alternative-Energy Fetish
2. The First Step
• Part III: The Future of Environmentalism
1. Women’s Rights
2. Improving Consumption
3. The Architecture of Community
4. Efficiency Culture
5. Asking Questions
• Epilogue: A Grander Narrative?
• Resources for Future Environmentalists
• Notes
• Index

[I]ndividual sacrifices don’t hold tremendous potential in the larger scheme of things since corporations, the government, and the military leave the largest energy footprints.

—Green Illusions: The Dirty Secrets of Clean Energy and the Future of Environmentalism, by Ozzie Zehner

Acknowledgments

To begin, I’d like to extend special thanks to numerous anonymous individuals who risked their standing or job security to connect me with leads, offer guidance, spill dirt, and sneak me into places I perhaps shouldn’t have been. These include one World Bank executive, one member of Congress, one engineer at General Motors, two marketing executives, one former teen celebrity, two political strategists in Washington DC, two military contractors, a high school vice principal, one solar industry executive, one solar sales rep, one mining worker, and three especially helpful security guards.

I also extend thanks to those organizations that generously released confidential reports, which I had worked on, so I might draw upon their findings in this very public setting. I appreciate the cooperation from numerous Department of Energy employees, who provided everything from images to insight on internal decisionmaking. This book would not have been possible without brave individuals from the World Bank, U.S. military, academia, and industry who were involved in whistle-blowing, industrial espionage, and leaks exposing wrongdoings. Their courage reminds us there are rules that are provisional and rules that are not.

My enthusiastic and keen agent, Uwe Stender, generously pursued a nonprofit press deal while offering me limitless support and advice. I'd like to thank the editors and staff at the University of Nebraska Press, including Heather Lundine, Bridget Barry, Joeth Zucco, and Cara Pesek, who believed in this work enough to buy the rights, support this book in staff meetings, coordinate expert reviews, considerably improve the manuscript, and put up with this green writer. Special thanks go to Karen Brown, whose copyediting cut a path through my writing for readers to follow.

I’d especially like to mention the generous support of numerous educators, colleagues, and advisers starting with John Grin, Loet Leydesdorff, Stuart Blume, Chunglin Kwa, and, in memory, Olga Amsterdamska from the University of Amsterdam; Steven Epstein (Northwestern University), Naomi Oreskes, Katrina Peterson, and Marisa Brandt from the Department of Science Studies at the University of California–San Diego; Joseph Dumit, Mario Biagioli, Tom Beamish, Tim Choy, Jim Griesemer, Caren Kaplan, Colin Milburn, Chris Kortright, and Michelle Stewart from the Department of Science and Technology Studies at the University of California–Davis; Cori Hayden, Cathryn Carson, Hélène Mialet, Mark Robinson, Mark Fleming, Jerry Zee, Mary Murrell, Diana Wear, and Gene Rochlin from the Science, Technology, and Society Center at the University of California–Berkeley; Reginald Bell at Kettering University; Charles Francis at the University of Nebraska; David Ochsner at the University of Texas at Austin; Brian Steele from the former Kalamazoo Academy; the dedicated staff at Nature Bridge Headlands Institute; and the Science, Technology, and Society faculty at the Massachusetts Institute of Technology.

I am grateful for the editors who dealt with various rough drafts of this manuscript, from proposal to completion, including especially Frieder Christman, Oleg Makariev, and Joe Clement, as well as the acutely helpful input from three anonymous academic reviewers. I'd also like to thank Saad Padela, Jenny Braswell, Harshan Ramadass, Tayo Sowunmi, Nahvae Frost, Myla Mercer, Judy Traub, Sarah Margolis, Cheryl Levy, Garrett Brown, and Karla L. Topper.

I’d like to thank numerous individuals who assisted me with conceptual and theoretical development: Charlie Zuver, D. A. M., Brad Borevitz, Kyla Schuller, Jeffrey Stevens, Jack Markey, L. Chase Smith, Kien Tran, Babs Mondschein, Yao Odamtten, Leonardo Martinez-Diaz, Drika Makariev, Valera Zakharov, Florence Zakharov, Ariel Linet, Ben Wyskida, Daniel Williford, Jesse Burchfield, Jessica Upson, Nathalie Jones, Nicholas Sanchez, Allison Rooney, Rex Norton, Hilda Norton, Jens Maier, Paul Burow, Santani Teng, Thomas Kwong, Stefanie Graeter, Susan Elliott Sim, Tom Waidzunas, Olivia Iordanescu, James Dawson, Maurice van den Dobbelsteen, Thomas Gurney, Jurjen van Rees, and the many other people who helped out along the way.

Most importantly, I am thankful for the boundless support of my family: Patti Zehner, Tom Zehner, Robby Zehner, Aaron Norton, Sabin Blake, and Randy Shannon.

All errors in judgment, content, or otherwise lie with me.  

Introduction: Unraveling the Spectacle

The world will not evolve past its current state of crisis by using the same thinking that created the situation.

—Albert Einstein


If the title of this book makes you a little suspicious of what I’m up to, then all is well. We’ll get along just fine. That’s because the dirty secrets ahead aren’t the kind you can be told (you probably wouldn’t believe me anyway), but rather are the kind you must be shown. But even then, I don’t expect you to accept all of my particular renderings.

Ahead you’ll see that this certainly isn’t a book for alternative energy. Neither is it a book against it. In fact, we won’t be talking in simplistic terms of for and against, left and right, good and evil. I wouldn’t dare bludgeon you with a litany of environmental truths when I suspect you’d rather we consider the far more intriguing questions of how such truths are made. Ultimately, this is a book of shades. This is a book for you and others who like to think.

Ahead, we’ll interrogate the very idea of being for or against energy technologies at all. Many energy debates arise from special interests as they posture to stake flags on the future—flags adorned with the emblems of their favorite pet projects. These iridescent displays have become spectacles in their own right. And oh, how we do delight in a spectacle with our morning coffee. Needless to say, these spectacles influence the answers we get—there is nothing new about this observation—but these energy spectacles do much more. They narrow our focus. They misdirect our attention. They sidetrack our most noble intentions. They limit the very questions we even think to ask.

Consider, for instance, America’s extensive automotive transportation system that, alongside impressive benefits, yields a host of negative side effects such as smog, particulates, CO2, and deadly accidents. America’s overwhelming response has been to adjust the technology, the automobile itself. Our politicians, corporations, universities, and the media open their palms to show us an array of biofuel, electric, and hydrogen vehicles as alternatives. But even though these vehicles might not emit toxic fumes directly, their manufacture, maintenance, and disposal certainly do. Even if we could run our suburbs on batteries and hydrogen fuel cells, these devices wouldn’t prevent America’s thirty thousand automobile collision fatalities per year.1 Nor would they slow suburban proliferation or the erosion of civil society that many scholars link to car culture. And it doesn’t seem that people enjoy being in cars much in the first place—40 percent of people say they’d be willing to pay twice the rent to cut their commute from forty-five to ten minutes, and a great many more would accept a pay cut if they could live closer to friends.2

Might we be better served to question the structure and goals of our transportation sector rather than focus so narrowly on alternative vehicles? Perhaps. Yet during times of energy distress we Americans tend to gravitate toward technological interventions instead of addressing the underlying conditions from which our energy crises arise.3 As we shall discover in the chapters to follow, these fancy energy technologies are not without side effects and limitations of their own.

When I speak on energy, the most frequent questions I receive are variants of “What energy technology is best?”—as if there is a straightforward answer. Every energy technology causes aches and pains; shifting to alternative energy represents nothing more than a shift to alternative aches and pains. Still, I find most people are interested in exploring genuine solutions to our energy problems; they’re eager to latch on and advocate for one if given the opportunity. As it turns out, there are quite a few solutions that could use some latching onto. But they’re not the ones you’ll read about in glossy magazines or see on television news—they’re far more intriguing, powerful, and rewarding than that.

In the latter part of this book, we’ll imagine tangible strategies that cross-examine technological politics. But don’t worry, I won’t waste your time with dreamy visions that are politically naïve or socially unworkable. The durable first steps we’ll discuss are not technologically based, but they stand on the same ground—that of human creativity and imagination. And you don’t need to live in any particular location or be trained as an engineer or a scientist (or any other trade for that matter) to take part.

But enough about you.

Who is this author, with the peculiar name? (And if you don’t much care, well then skip the next couple of paragraphs.) Your author fittingly grew up in Kalamazoo—home to numerous quirky Midwesterners, a couple of universities, a pharmaceutical company, and an industrial power plant where, it just so happens, he had a job one summer long ago.

At 4:30 a.m. daily, I would awake in time to skip breakfast, drive to the remote facility, and crawl into a full-body suit designed for someone twice my size, complete with a facemask and headlight. Holding a soot-scraping tool high above my head in the position of a diver, I was plunged head first by my coworkers into a narrow exhaust manifold that twisted down into the dark crypt where I would work out the day. I remember the weight of silence that lay upon my eardrums and how my scraping would chop it into rhythms, then tunes. I learned the length of eight hours down there. I haven’t forgotten the lunchtime when we gossiped about our supervisor’s affair, or the day my breathing filter didn’t seal properly, or the pain of the rosy mucus I coughed up that night. I was tough then. But at the time, I didn’t know I’d been breathing in asbestos courtesy of a company that has since gone bankrupt. Nor did I realize that my plant’s radiation levels exceeded those inside a nearby nuclear power facility. These were the kind of answers that demanded more questions.

I suspect my summer spent cleaning out the bowels of that beast still informs the questions that attract me, though my work today is much different. Your author is currently an environmental researcher and a visiting scholar at the University of California–Berkeley. As an adviser to organizations, governments, and philanthropists, I deal with the frustration of these groups as they draw upon their resources or notoriety in attempts to create positive change. Sadly, some of them have come to me for assistance after supporting environmental initiatives that actually harmed those they had intended to help. With overwhelming requests pouring in, where can policymakers, professors, business leaders, concerned citizens, voters, and even environmentalists best direct their energies?

In order to get some answers (and more importantly, find the right questions to ask), I geared up again. But this time I held a pen and notepad above my head as I dove into the underbelly of America’s energy infrastructure to perform a long overdue colonoscopy. What I began to uncover haunted me—unsettling realizations that pulled me to investigate further. I pieced together funding for the first year. A year turned into two, then four—it’s now been a decade since I began crisscrossing the energy world: arctic glaciers, oil fields in the frigid North Sea, turbine manufacturing facilities in Ireland, wind farms in Northern California, sun-scorched solar installations in Africa, biofuel farms in Iowa, unmarked government facilities in New Mexico, abandoned uranium mines in Colorado, modest dwellings in rural China, bullet trains in Japan, walkable villages in Holland, dusty archives in the Library of Congress, and even the Senate and House chambers in Washington DC.

I aimed to write an accessible yet rigorous briefing—part investigative journalism, part cultural critique, and part academic scholarship. I chose to publish with a nonprofit press and donate all author royalties to the underserved initiatives outlined ahead.

While I present a critique of environmentalism in America, I don’t intend to criticize my many colleagues dedicated to working toward positive change. I aim only to scrutinize our creeds and biases. For that reason, you’ll notice I occasionally refer to “the mainstream environmental movement,” an admittedly vague euphemism for a heterogeneous group. Ultimately, we’re all in this together, which means we’re all going to be part of the solution. I’d like to offer a constructive critique of those efforts, not a roadblock. I don’t take myself too seriously and I don’t expect others to either. I ask only for your consideration of an alternate view. And even while I challenge claims to truth making, you’ll see I emerge from the murky depths to voice my own claims to truth from time to time. This is the messy business of constructive argumentation, the limits of which are not lost on me.

Producing power is not simply a story of technological possibility, inventors, scientific discoveries, and profits; it is a story of meanings, metaphor, and human experience as well. The story we’ll lay bare is far from settled. This book is but a snapshot. It is my hope that you and other readers will help complete the story. If after reading, scanning, or burning this book you’d care to continue the dialogue, I’d be honored to speak at your university, library, community group, or other organization (see GreenIllusions.org or OzzieZehner.com). I also invite you to enjoy a complimentary subscription to an ongoing series of environmental trend briefings at CriticalEnvironmentalism.org.

Part I: Seductive Futures

1. Solar Cells and Other Fairy Tales


Once upon a time, a pair of researchers led a group of study participants into a laboratory overlooking the ocean, gave them free unlimited coffee, and assigned them one simple task. The researchers spread out an assortment of magazine clippings and requested that participants assemble them into collages depicting what they thought of energy and its possible future.1 No cost-benefit analyses, no calculations, no research, just glue sticks and scissors. They went to work. Their resulting collages were telling—not for what they contained, but for what they didn’t.

They didn’t dwell on energy-efficient lighting, walkable communities, or suburban sprawl. They didn’t address population, consumption, or capitalism. They instead pasted together images of wind turbines, solar cells, biofuels, and electric cars. When they couldn’t find clippings, they asked to sketch. Dams, tidal and wave-power systems, even animal power. They eagerly cobbled together fantastic totems to a gleaming future of power production. As a society, we have done the same.

The seductive tales of wind turbines, solar cells, and biofuels foster the impression that with a few technical upgrades, we might just sustain our current energy trajectories (or close to it) without consequence. Media and political coverage lull us into dreams of a clean energy future juxtaposed against a tumultuous past characterized by evil oil companies and the associated energy woes they propagated. Like most fairy tales, this productivist parable contains a tiny bit of truth. And a whole lot of fantasy.

Act I

I should warn you in advance: this book has a happy ending, but the joust in this first chapter might not. Even so, let’s first take a moment to consider the promising allure of solar cells. Throughout the diverse disciplines of business, politics, science, academia, and environmentalism, solar cells stand tall as a valuable technology that everyone agrees is worthy of advancement. We find plenty of support for solar cells voiced by:

Politicians,

If we take on special interests, and make aggressive investments in clean and renewable energy, like Google’s done with solar here in Mountain View, then we can end our addiction to oil, create millions of jobs and save the planet in the bargain.

—Barack Obama


Textbooks,

Photovoltaic power generation is reliable, involves no moving parts, and the operation and maintenance costs are very low. The operation of a photovoltaic system is silent, and creates no atmospheric pollution. . . . Power can be generated where it is required without the need for transmission lines. . . . Other innovative solutions such as photovoltaic modules integrated in the fabric of buildings reduce the marginal cost of photovoltaic energy to a minimum. The economic comparison with conventional energy sources is certain to receive a further boost as the environmental and social costs of power generation are included fully in the picture.

—From the textbook, Solar Electricity


Environmentalists,

Solar power is a proven and cost-effective alternative to fossil fuels and an important part of the solution to end global warming. The sun showers the earth with more usable energy in one minute than people use globally in one year.

—Greenpeace


And even oil companies,

Solar solutions provide clean, renewable energy that save you money.

—BP2


We ordinarily encounter the dissimilar views of these groups bound up in a tangle of conflict, but solar energy forms a smooth ground of commonality where environmentalists, corporations, politicians, and scientists can all agree. The notion of solar energy is flexible enough to allow diverse interest groups to take up solar energy for their own uses: corporations crown themselves with halos of solar cells to cast a green hue on their products, politicians evoke solar cells to garner votes, and scientists recognize solar cells as a promising well of research funding. It’s in everyone’s best interest to broadcast the advantages of solar energy. And they do. Here are the benefits they associate with solar photovoltaic technology:

• CO2 reduction: Even if solar cells are expensive now, they’re worth the cost to avoid the more severe dangers of climate change.

• Simplicity: Once installed, solar panels are silent, reliable, and virtually maintenance free.

• Cost: Solar costs are rapidly decreasing.

• Economies of scale: Mass production of solar cells will lead to cheaper panels.

• Learning by doing: Experience gained from installing solar systems will lead to further cost reductions.

• Durability: Solar cells last an extremely long time.

• Local energy: Solar cells reduce the need for expensive power lines, transformers, and related transmission infrastructure.3

Where the Internet Ends

All of these benefits seem reasonable, if not downright encouraging; it’s difficult to see why anyone would want to argue with them. Over the past half century, journalists, authors, politicians, corporations, environmentalists, scientists, and others have eagerly ushered a fantasmatic array of solar devices into the spotlight, reported on their spectacular journeys into space, featured their dedicated entrepreneurs and inventors, celebrated their triumphs over dirty fossil fuels, and dared to envisage a glorious solar future for humanity.

The sheer magnitude of literature on the subject overwhelms—not just in newspapers, magazines, and books, but also in scientific literature, government documents, corporate materials, and environmental reports—far, far too much to sift through. The various tributes to solar cells could easily fill a library; the critiques would scarcely fill a book bag.

When I searched for critical literature on photovoltaics, Google returned numerous “no results found” errors—a response I’d never seen (or even realized existed) until I began querying for published drawbacks of solar energy. Bumping into the end of the Internet typically requires an especially arduous expedition into the darkest recesses of human knowledge, yet searching for drawbacks of solar cells can deliver you in a click. Few writers dare criticize solar cells, which understandably leads us to presume this sunny resource doesn’t present serious limitations and leaves us clueless as to why nations can’t seem to deploy solar cells on a grander scale. Though if we put on our detective caps and pull out our flashlights, we might just find some explanations lurking in the shadows—perhaps in the most unlikely of places.

Photovoltaics in Sixty Seconds or Less

Historians of technology track solar cells back to 1839 and credit Alexandre-Edmond Becquerel for discovering that certain light-induced chemical reactions produce electrical currents. This remained primarily an intellectual curiosity until 1940, when solid-state diodes emerged to form a foundation for modern silicon solar cells. The first solar cells premiered just eighteen years later, aboard the U.S. Navy’s Vanguard 1 satellite.4

Today manufacturers construct solar cells using techniques and materials from the microelectronics industry. They spread layers of p-type silicon and n-type silicon onto substrates. When sunlight hits this silicon sandwich, electricity flows. Brighter sunlight yields more electrical output, so engineers sometimes incorporate mirrors into the design, which capture and direct more light toward the panels. Newer thin-film technologies employ less of the expensive silicon materials. Researchers are advancing organic, polymer, nanodot, and many other solar cell technologies.5 Patent activity in these fields is rising.
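
To make the “brighter sunlight yields more electrical output” point concrete, the standard textbook idealization is the single-diode model, in which a light-generated current competes with losses across the cell’s internal diode. The sketch below is a minimal version of that model, not anything from this book; the short-circuit and saturation currents are illustrative assumptions:

[code]
# A minimal single-diode model of a silicon solar cell: a textbook
# idealization, not a figure from this book. Illustrative values only.
import math

Q = 1.602e-19   # electron charge (coulombs)
K = 1.381e-23   # Boltzmann constant (J/K)

def cell_current(voltage, irradiance, t_kelvin=298.0,
                 i_sc_stc=8.0, i_sat=1e-9):
    """Cell current (A): photocurrent minus diode losses.
    i_sc_stc is an assumed short-circuit current at 1,000 W/m^2."""
    i_photo = i_sc_stc * irradiance / 1000.0   # scales with sunlight
    v_thermal = K * t_kelvin / Q               # ~0.026 V at room temp
    return i_photo - i_sat * (math.exp(voltage / v_thermal) - 1.0)

for g in (200, 600, 1000):   # hazy sky ... full midday sun (W/m^2)
    watts = 0.5 * cell_current(0.5, g)
    print(f"{g:4d} W/m^2 -> {watts:4.2f} W per cell at 0.5 V")
[/code]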

Despite being around us for so long, solar technologies have largely managed to evade criticism. Nevertheless, there is now more revealing research to draw upon—not from Big Oil and climate change skeptics—but from the very government offices, environmentalists, and scientists promoting solar photovoltaics. I’ll draw primarily from this body of research as we move on.

Powering the Planet with Photovoltaics

When I give presentations on alternative energy, among the most common questions philanthropists, students, and environmentalists ask is, “Why can’t we get our act together and invest in solar cells on a scale that could really create an impact?” It is a reasonable question, and it deserves a reasonable explanation.

Countless articles and books contain a statistic reading something like this: Just a fraction of some-part-of-the-planet would provide all of the earth’s power if we simply installed solar cells there. For instance, environmentalist Lester Brown, president of the Earth Policy Institute, indicates that it is “widely known within the energy community that there is enough solar energy reaching the earth each hour to power the world economy for one year.”6 Even Brown’s nemesis, the skeptical environmentalist Bjorn Lomborg, claims that “we could produce the entire energy consumption of the world with present-day solar cell technology placed on just 2.6 percent of the Sahara Desert.”7 Journalists, CEOs, and environmental leaders widely disseminate variations of this statistic by repeating it almost ritualistically in a mantra honoring the monumental promise of solar photovoltaic technologies. The problem with this statistic is not that it is flatly false, but that it is somewhat true.

“Somewhat true” might not seem adequate for making public policy decisions, but it has been enough to propel this statistic, shiny teeth and all, into the limelight of government studies, textbooks, official reports, environmental statements, and into the psyches of millions of people. It has become an especially powerful rhetorical device despite its misleading flaw. While it’s certainly accurate to state that the quantity of solar energy hitting that small part of the desert is equivalent to the amount of energy we consume, it does not follow that we can harness it, an extension many solar promoters explicitly or implicitly assume when they repeat the statistic. Similarly, any physicist can explain how a single twenty-five-cent quarter contains enough energy bound up in its atoms to power the entire earth, but since we have no way of accessing these forces, the quarter remains a humble coin rather than a solution to our energy needs. The same limitation holds when considering solar energy.
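
The “one hour of sunlight” statistic itself is easy to check from first principles. The sketch below uses standard reference values (the solar constant and a rough 18-terawatt average for world energy demand), not figures from this book:

[code]
# Back-of-envelope check of the "one hour of sunlight powers the world
# for a year" statistic. Inputs are standard reference values, not
# figures from this book.
import math

solar_constant = 1361.0                 # W/m^2 at top of atmosphere
earth_radius = 6.371e6                  # meters
disc_area = math.pi * earth_radius**2   # cross-section facing the sun

incident_watts = solar_constant * disc_area      # ~1.7e17 W
one_hour_joules = incident_watts * 3600

world_demand_watts = 18e12              # ~18 TW average, rough figure
one_year_joules = world_demand_watts * 3600 * 8760

print(f"one hour of sunlight: {one_hour_joules:.2e} J")
print(f"one year of demand:   {one_year_joules:.2e} J")
# Same order of magnitude, so the statistic is "somewhat true."
# The book's point stands: incidence is not the same as capture.
[/code]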

Skeptical? I was too. And we’ll come to that. But first, let’s establish how much it might actually cost to build a solar array capable of powering the planet with today’s technology (saying nothing yet about the potential for future cost reductions). By comparing global energy consumption with the rosiest photovoltaic cost estimates, courtesy of solar proponents themselves, we can roughly sketch a total expense. The solar cells would cost about $59 trillion; the mining, processing, and manufacturing facilities to build them would cost about $44 trillion; and the batteries to store power for evening use would cost $20 trillion, bringing the total to about $123 trillion plus about $694 billion per year for maintenance.8 Keep in mind that the entire gross domestic product (GDP) of the United States, which includes all food, rent, industrial investments, government expenditures, military purchasing, exports, and so on, is only about $14 trillion. This means that if every American were to go without food, shelter, protection, and everything else while working hard every day, naked, we might just be able to build a photovoltaic array to power the planet in about a decade. But, unfortunately, these estimations are optimistic.

If actual installed costs for solar projects in California are any guide, a global solar program would cost roughly $1.4 quadrillion, about one hundred times the United States GDP.9 Mining, smelting, processing, shipping, and fabricating the panels and their associated hardware would yield about 149,100 megatons of CO2.10 And everyone would have to move to the desert, otherwise transmission losses would make the plan unworkable.
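
The arithmetic in the two preceding paragraphs is worth laying out explicitly. A minimal sketch, summing only the dollar figures the text itself supplies:

[code]
# Totaling the book's own figures for a planet-scale photovoltaic
# build-out (all dollar amounts as given in the text above).
cells      = 59e12    # solar cells
facilities = 44e12    # mining, processing, and manufacturing capacity
batteries  = 20e12    # storage for evening use
total = cells + facilities + batteries       # ~$123 trillion
maintenance_per_year = 694e9

us_gdp = 14e12
print(f"build-out: ${total / 1e12:.0f} trillion, "
      f"or {total / us_gdp:.1f} years of total U.S. GDP")

# At actual California installed costs the book's figure is larger:
california_scenario = 1.4e15                 # $1.4 quadrillion
print(f"California-cost scenario: {california_scenario / us_gdp:.0f}x GDP")
[/code]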

That said, few solar cell proponents believe that nations ought to rely exclusively on solar cells. They typically envision an alternative energy future with an assortment of energy sources— wind, biofuels, tidal and wave power, and others. Still, calculating the total bill for solar brings up some critical questions. Could manufacturing and installing photovoltaic arrays with today’s technology on any scale be equally absurd? Does it just not seem as bad when we are throwing away a few billion dollars at a time? Perhaps. Or perhaps none of this will really matter since photovoltaic costs are dropping so quickly.

Price Check

Kathy Loftus, the executive in charge of energy initiatives at Whole Foods Market, can appreciate the high costs of solar cells today, but she is optimistic about the future: “We’re hoping that our purchases along with some other retailers will help bring the technology costs down.”11 Solar proponents share her enthusiasm. The Earth Policy Institute claims solar electricity costs are “falling fast due to economies of scale as rising demand drives industry expansion.”12 The Worldwatch Institute agrees, claiming that “analysts and industry leaders alike expect continued price reductions in the near future through further economies of scale and increased optimization in assembly and installation.”13

At first glance, this is great news; if solar cell costs are dropping so quickly then it may not be long before we can actually afford to clad the planet with them. There is little disagreement among economists that manufacturing ever larger quantities of solar cells results in noticeable economies of scale, though it’s less apparent that they consider these cost reductions particularly significant in the larger scheme of things. They cite several reasons.

First, it is precarious to assume that the solar industry will realize substantial economies of scale before solar cells become cost competitive with other forms of energy production. Solar photovoltaic investments have historically been tossed about indiscriminately like a small raft in the larger sea of the general economy. Expensive solar photovoltaic installations gain popularity during periods of high oil costs, but are often the first line items legislators cut when oil becomes cheaper again. For instance, during the oil shock of the 1970s, politicians held up solar cells as a solution, only to toss them aside once the oil price tide subsided. More recent economic turmoil forced Duke Energy to slash $50 million from its solar budget, BP cut its photovoltaic production capacity, and Solyndra filed for Chapter 11 bankruptcy.14 Economists argue that it’s difficult to achieve significant economies of scale in an industry with such violent swings between investment and divestment.

Figure 1: California solar system costs. Installed photovoltaic system costs in California remain high due to a variety of expenses that are not technically determined. (Data from California Energy Commission and Solarbuzz)

Second, solar advocates underscore dramatic photovoltaic cost reductions since the 1960s, leaving an impression that the chart of solar cell prices is shaped like a sharply downward-tilted arrow. But according to the solar industry, prices from the most recent decade have flattened out. Between 2004 and 2009, the installed cost of solar photovoltaic modules actually increased—only when the financial crisis swung into full motion over subsequent years did prices soften. So is this just a bump in the downward-pointing arrow? Probably. However, even if solar cells become markedly cheaper, the drop may not generate much impact since photovoltaic panels themselves account for less than half the cost of an installed solar system, according to the industry.15 Based on research by solar energy proponents and field data from the California Energy Commission (one of the largest clearinghouses of experience-based solar cell data), cheaper photovoltaics won’t offset escalating expenditures for insurance, warranty expenses, materials, transportation, labor, and other requirements.16 Low-tech costs are claiming a larger share of the high-tech solar system price tag.
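
To see why cheaper panels only dent the installed price, take the industry figure just cited, that modules account for less than half of system cost. A minimal sketch, assuming a 45 percent module share as a stand-in for “less than half”:

[code]
# Why cheaper modules only dent installed-system prices. The 45 percent
# module share is an assumption standing in for "less than half."
module_share = 0.45
for module_price_drop in (0.25, 0.50, 0.75):
    system_savings = module_share * module_price_drop
    print(f"modules {module_price_drop:.0%} cheaper -> "
          f"installed system only {system_savings:.0%} cheaper")
[/code]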

Finally, unforeseen limitations are blindsiding the solar industry as it grows.17 Fire departments restrict solar roof installations and homeowner associations complain about the ugly arrays. Repair and maintenance costs remain stubbornly high. Adding to the burden, solar arrays now often require elaborate alarm systems and locking fasteners; without such protection, thieves regularly steal the valuable panels. Police departments throughout the country are increasingly reporting photovoltaic pilfering, which is incidentally inflating home insurance premiums. For instance, California resident Glenda Hoffman woke up one morning to discover thieves stole sixteen solar panels from her roof as she slept. The cost to replace the system chimed in at $95,000, an expense her insurance company covered. Nevertheless, she intends to protect the new panels herself, warning, “I have a shotgun right next to the bed and a .22 under my pillow.”18

Disconnected: Transmission and Timing

Solar cells offer transmission benefits in niche applications when they supplant disposable batteries or other expensive energy supply options. For example, road crews frequently use solar cells in tandem with rechargeable battery packs to power warning lights and monitoring equipment along highways. In remote and poor equatorial regions of the world, tiny amounts of expensive solar energy can generate a sizable impact on families and their communities. Here, solar cells provide a viable alternative to candles, disposable batteries, and kerosene lanterns, which are expensive, dirty, unreliable, and dangerous.

Given the appropriate socioeconomic context, solar energy can help villages raise their standards of living. Radios enable farmers to monitor the weather and connect families with news and cultural events. Youth who grow up with evening lighting, and thus a better chance for education, are more likely to wait before getting married and have fewer, healthier children if they become parents.19 This allows the next generation of the village to grow up in more economically stable households with extra attention and resources allotted to them.

Could rich nations realize similar transmission-related benefits? Coal power plants require an expensive network of power lines and transformers to deliver their power. Locally produced solar energy may still require a transformer but it bypasses the long-distance transmission step. Evading transmission lines during high midday demand is presumably beneficial since this is precisely when fully loaded transmission lines heat up, which increases their resistance and thus wastes energy to heat production. Solar cells also generate their peak output right when users need it most, at midday on hot sunny days as air conditioners run full tilt. Electricity is worth more at these times because it is in short supply. During these periods, otherwise dormant power facilities, called peaker plants, fire up to fulfill spikes in electrical demand. Peaker plants are more expensive and less efficient than base-load plants, so avoiding their use is especially valuable. Yet analysts often evaluate and compare solar power costs against average utility rates.20 This undervalues solar’s midday advantage. When taken into account, timing benefits increase the value of solar cell output by up to 20 percent.
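
The mechanics behind that “up to 20 percent” figure can be shown with a toy valuation. The time-of-day prices and output profile below are hypothetical, invented only to illustrate how valuing midday output at midday prices, rather than at a flat average rate, raises solar’s apparent worth:

[code]
# Toy time-of-day valuation. Prices and output shares are hypothetical,
# chosen only to illustrate the mechanism, not taken from any study.
price  = [0.09, 0.12, 0.10, 0.06]   # $/kWh: morning, midday, evening, night
output = [0.20, 0.60, 0.20, 0.00]   # share of daily solar energy by period

flat_rate   = sum(price) / len(price)
value_flat  = sum(output) * flat_rate
value_timed = sum(p * o for p, o in zip(price, output))

print(f"timing uplift: {value_timed / value_flat - 1:.0%}")
# Roughly 19 percent here, in the neighborhood of the figure cited above.
[/code]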

Transmission and timing advantages of solar electricity led the director of the University of California Energy Institute, Severin Borenstein, to find out how large these benefits are in practice. His conclusions are disheartening.

Borenstein’s research suggests that “actual installation of solar PV [photovoltaic] systems in California has not significantly reduced the cost of transmission and distribution infrastructure, and is unlikely to do so in other regions.” Why? First, most transmission infrastructure has already been built, and localized solar-generation effects are not enough to reduce that infrastructure. Even if they were, the savings would be small since solar cells alone would not shrink the breadth of the distribution network. Furthermore, California and the other thirty states with solar subsidies have not targeted investments toward easing tensions in transmission-constrained areas. Dr. Borenstein took into account the advantageous timing of solar cell output but he ultimately concludes: “The market benefits of installing the current solar PV technology, even after adjusting for its timing and transmission advantages, are calculated to be much smaller than the costs. The difference is so large that including current plausible estimates of the value of reducing greenhouse gases still does not come close to making the net social return on installing solar PV today positive.”21 In a world with limited funds, these findings don’t position solar cells well. Still, solar advocates insist the expensive panels are a necessary investment if we intend to place a stake in the future of energy.

Learning by Doing: Staking Claims on the Future

In the 1980s Ford Motor Company executives noticed something peculiar in their sales figures. Customers were requesting cars with transmissions built in their Japanese plant instead of the American one. This puzzled engineers since both the U.S. and Japanese transmission plants were built to the same blueprints and same tolerances; the transmissions should have been identical. They weren’t. When Ford engineers disassembled and analyzed the transmissions, they discovered that even though the American parts met allowable tolerances, the Japanese parts fell within an even tighter tolerance, resulting in transmissions that ran more smoothly and yielded fewer defects—an effect researchers attribute to the prevalent Japanese philosophy of Kaizen. Kaizen is a model of continuous improvement achieved through hands-on experience with a technology. After World War II, Kaizen grew in popularity, structured largely by U.S. military innovation strategies developed by W. Edwards Deming. The day Ford engineers shipped their blueprints to Japan marked the beginning of this design process, not the end. Historians of technological development point to such learning-by-doing effects when explaining numerous technological success stories. We might expect such effects to benefit the solar photovoltaic industry as well.

Indeed, there are many cases where this kind of learning by doing aids the solar industry. For instance, the California Solar Initiative solved numerous unforeseen challenges during a multiyear installation of solar systems throughout the state—unexpected and burdensome administration requirements, lengthened application processing periods, extended payment times, interconnection delays, extra warranty expenses, and challenges in metering and monitoring the systems. Taken together, these challenges spurred learning that would not have been possible without the hands-on experience of running a large-scale solar initiative.22 Solar proponents claim this kind of learning is bringing down the cost of solar cells.23 But what portion of photovoltaic price drops over the last half century resulted from learning-by-doing effects and what portion evolved from other factors?

When Gregory Nemet from the Energy and Resources Group at the University of California disentangled these factors, he found learning-by-doing innovations contributed only slightly to solar cell cost reductions over the last thirty years. His results indicate that learning from experience “only weakly explains change in the most important factors—plant size, module efficiency, and the cost of silicon.”24 In other words, while learning-by-doing effects do influence the photovoltaic manufacturing industry, they don’t appear to justify massive investments in a fabrication and distribution system just for the sake of experience.

Nevertheless, there is a link that Dr. Nemet didn’t study: silicon’s association with rapid advancements in the microelectronics industry. Microchips and solar cells are both crafted from silicon, so perhaps they are both subject to Moore’s law, the expectation that the number of transistors on a microchip will double every twenty-four months. The chief executive of Nanosolar points out, “The solar industry today is like the late 1970s when mainframe computers dominated, and then Steve Jobs and IBM came out with personal computers.” The author of a New York Times article corroborates the high-tech comparison: “A link between Moore’s law and solar technology reflects the engineering reality that computer chips and solar cells have a lot in common.” You’ll find plenty of other solar proponents industriously evoking the link.25

You’ll have a difficult time finding a single physicist to agree. Squeezing more transistors onto a microchip brings better performance and subsequently lower costs, but miniaturizing and packing solar cells tightly together simply reduces their surface area exposed to the sun’s energy. Smaller is worse, not better. But size comparisons are a literal interpretation of Moore’s law. Do solar technologies follow Moore’s law in terms of cost or performance?

No and no.

Proponents don’t offer data, statistics, figures, or any other explanation beyond the comparison itself—a hit and run. Microchips, solar cells, and Long Beach all contain silicon, but their similarities end there. Certainly solar technologies will improve— there is little argument on that—but expecting them to advance at a pace even approaching that of the computer industry, as we shall see, becomes far more problematic.
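
One way to see how strained the analogy is: apply Moore’s-law doubling to module prices and watch where it leads. A sketch under a rough assumption (the $25-per-watt 1980 starting price is a historical ballpark, used purely for illustration):

[code]
# If module prices had halved every two years, Moore's-law style.
# The $25/W starting point is a rough 1980 ballpark, for illustration.
start_year, start_price = 1980, 25.0     # dollars per watt

for year in (1990, 2000, 2010):
    doublings = (year - start_year) / 2
    price = start_price / 2**doublings
    print(f"{year}: ${price:.6f} per watt")
# By 2010 this predicts well under a tenth of a cent per watt, which
# is nothing like the actual trend shown in figure 2.
[/code]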

Solar Energy and Greenhouse Gases

Perhaps no single benefit of solar cells is more cherished than their ability to reduce CO2 emissions. And perhaps no other purported benefit stands on softer ground. To start, a group of Columbia University scholars calculated a solar cell’s lifecycle carbon footprint at twenty-two to forty-nine grams of CO2 per kilowatt-hour (kWh) of solar energy produced.26 This carbon impact is much lower than that of fossil fuels.27 Does this offer justification for subsidizing solar panels?

Figure 2: Solar module costs do not follow Moore’s law. Despite the common reference to Moore’s law by solar proponents, three decades of data show that photovoltaic module cost reductions do not mirror cost reductions in the microelectronics industry. Note the logarithmic scale. (Data from Solarbuzz and Intel)

We can begin by considering the market price of greenhouse gases like CO2. In Europe companies must buy vouchers to emit CO2, which trade at around twenty to forty dollars per ton. Most analysts expect American permits to stabilize on the open market somewhere below thirty dollars per ton.28 Today’s solar technologies would compete with coal only if carbon credits rose to three hundred dollars per ton. Photovoltaics could nominally compete with natural gas only if carbon offsets skyrocketed to six hundred dollars per ton.29 It is difficult to conceive of conditions that would thrust CO2 prices to such stratospheric levels in real terms. Even some of the most expensive options for dealing with CO2 would become cost competitive long before today’s solar cell technologies. If limiting CO2 is our goal, we might be better off directing our time and resources to those options first; solar cells seem a wasteful and pricey strategy.
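
The sketch below shows what a break-even figure like that implies. The $300-per-ton price and the 49-gram solar footprint come from the text above; coal’s roughly 1,000 grams of CO2 per kilowatt-hour is an outside lifecycle estimate, an assumption rather than a number from this book:

[code]
# Converting the cited $300/ton break-even into an implied cost premium.
# Coal's ~1,000 g CO2/kWh lifecycle intensity is an outside assumption;
# solar's 49 g/kWh is the Columbia study's upper bound quoted above.
coal_g_per_kwh  = 1000
solar_g_per_kwh = 49
avoided_tons_per_kwh = (coal_g_per_kwh - solar_g_per_kwh) / 1e6

breakeven_carbon_price = 300          # dollars per ton, as cited
implied_premium = breakeven_carbon_price * avoided_tons_per_kwh
print(f"implied solar premium over coal: ${implied_premium:.2f} per kWh")
# About $0.29 per kWh, far above typical wholesale power prices, which
# is why plausible carbon prices don't close the gap.
[/code]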

Unfortunately, there’s more. Not only are solar cells an overpriced tool for reducing CO2 emissions, but their manufacturing process is also one of the largest emitters of hexafluoroethane (C2F6), nitrogen trifluoride (NF3), and sulfur hexafluoride (SF6). Used for cleaning plasma production equipment, these three gruesome greenhouse gases make CO2 seem harmless. As a greenhouse gas, C2F6 is twelve thousand times more potent than CO2, is 100 percent manufactured by humans, and survives ten thousand years once released into the atmosphere.30 NF3 is seventeen thousand times more virulent than CO2, and SF6, the most treacherous greenhouse gas, according to the Intergovernmental Panel on Climate Change, is twenty-five thousand times more threatening.31 The solar photovoltaic industry is one of the leading and fastest growing emitters of these gases, which are now measurably accumulating within the earth’s atmosphere. A recent study on NF3 reports that atmospheric concentrations of the gas have been rising an alarming 11 percent per year.32
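
For a sense of scale, those potency multipliers convert directly into CO2 equivalents; the one-kilogram release in this sketch is hypothetical:

[code]
# CO2-equivalents for the three cleaning gases, using the potency
# multipliers quoted in the text. The 1 kg release is hypothetical.
gwp = {"C2F6": 12_000, "NF3": 17_000, "SF6": 25_000}
for gas, multiplier in gwp.items():
    print(f"1 kg of {gas} warms like {multiplier / 1000:.0f} tons of CO2")
[/code]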

Check the Ingredients: Toxins and Waste

In the central plains of China’s Henan Province, local residents grew suspicious of trucks that routinely pulled in behind the playground of their primary school and dumped a bubbling white liquid onto the ground. Their concerns were justified. According to a Washington Post investigative article, the mysterious waste was silicon tetrachloride, a highly toxic chemical that burns human skin on contact, destroys all plant life it comes near, and violently reacts with water.33 The toxic waste was too expensive to recycle, so it was simply dumped behind the playground—daily—for over nine months by Luoyang Zhonggui High-Technology Company, a manufacturer of polysilicon for solar cells. Such cases are far from rare. A report by the Silicon Valley Toxics Coalition claims that as the solar photovoltaic industry expands,

little attention is being paid to the potential environmental and health costs of that rapid expansion. The most widely used solar PV panels have the potential to create a huge new wave of electronic waste (e-waste) at the end of their useful lives, which is estimated to be 20 to 25 years. New solar PV technologies are increasing cell efficiency and lowering costs, but many of these use extremely toxic materials or materials with unknown health and environmental risks (including new nanomaterials and processes).34


For example, sawing silicon wafers releases a dangerous dust as well as large amounts of sodium hydroxide and potassium hydroxide. Crystalline-silicon solar cell processing involves the use or release of chemicals such as phosphine, arsenic, arsine, trichloroethane, phosphorous oxychloride, ethyl vinyl acetate, silicon trioxide, stannic chloride, tantalum pentoxide, lead, hexavalent chromium, and numerous other chemical compounds. Perhaps the most dangerous chemical employed is silane, a highly explosive gas involved in hazardous incidents on a routine basis according to the industry.35 Even newer thin-film technologies employ numerous toxic substances, including cadmium, which is categorized as an extreme toxin by the U.S. Environmental Protection Agency and a Group 1 carcinogen by the International Agency for Research on Cancer. At the end of a solar panel’s usable life, its embedded chemicals and compounds can either seep into groundwater supplies if tossed in a landfill or contaminate air and waterways if incinerated.36

Are the photovoltaic industry’s secretions of heavy metals, hazardous chemical leaks, mining operation risks, and toxic wastes especially problematic today? If you ask residents of Henan Province, the answer will likely be yes. Nevertheless, when pitted against the more dangerous particulate matter and pollution from the fossil-fuel industry, the negative consequences of solar photovoltaic production don’t seem significant at all. Compared to the fossil-fuel giants, the photovoltaic industry is tiny, supplying less than a hundredth of 1 percent of America’s electricity.37 (If the text on this page represented total U.S. power supply, the photovoltaic portion would fit inside the period at the end of this sentence.) If photovoltaic production grows, so will the associated side effects.

Further, as we’ll explore in future chapters, even if the United States expands solar energy capacity, this may increase coal use rather than replace it. There are far more effective ways to invest our resources, ways that will displace coal consumption—strategies that will lessen, not multiply, the various ecological consequences of energy production. Yet we have much to discuss before coming to those—most immediately, a series of surprises.

Photovoltaic Durability: A Surprise Inside Every Panel

The United Arab Emirates recently commissioned the largest cross-comparison test of photovoltaic modules to date in preparation for building an ecometropolis called Masdar City. The project’s technicians installed forty-one solar panel systems from thirty-three different manufacturers in the desert near Abu Dhabi’s international airport.38 They designed the test to differentiate between cells from various manufacturers, but once the project was initiated, it quickly drew attention to something else—the drawbacks that all of the cells shared, regardless of their manufacturer.

Solar cell firms generally test their panels in the most ideal of conditions—a Club Med of controlled environments. The real-world desert outside Masdar City proved less accommodating. Atmospheric humidity and haze reflected and dispersed the sun’s rays. Even more problematic was the dust, which technicians had to scrub off almost daily. Soiling is not always so easy to remove. Unlike Masdar’s panels, hovering just a few feet above the desert sands, many solar installations perch high atop steep roofs. Owners must tango with gravity to clean their panels or hire a stand-in to dance for them. Researchers discovered that soiling routinely cut electrical output of a San Diego site by 20 percent during the dusty summer months. In fact, according to researchers from the photovoltaic industry, soiling effects are “magnified where rainfall is absent in the peak-solar summer months, such as in California and the Southwest region of the United States,” or in other words, right where the prime real estate for solar energy lies.39

When it comes to cleanliness, solar cells are prone to the same vulnerability as clean, white dress shirts; small blotches reduce their value dramatically. Due to wiring characteristics, solar output can drop disproportionately if even tiny fragments of the array are blocked, making it essential to keep the entire surface clear of the smallest obstructions, according to manufacturers. Bird droppings, shade, leaves, traffic dust, pollution, hail, ice, and snow all induce headaches for solar cell owners as they attempt to keep the entirety of their arrays in constant contact with the sunlight that powers them. Under unfavorable circumstances, these soiling losses can climb to 80 percent in the field.40

When journalists toured Masdar’s test site, they visited the control room that provided instant energy readouts from each company’s solar array. On that late afternoon, the journalists noted that the most productive unit was pumping out four hundred watts and the least productive under two hundred. All of the units were rated at one thousand watts maximum. This peak output, however, can only theoretically occur briefly at midday, when the sun is at its brightest, and only if the panels are located within an ideal latitude strip and tilted in perfect alignment with the sun (and all other conditions are near perfect as well). The desert outside Masdar City seems like one of the few ideal locations on the planet for such perfection. Unfortunately, during the midday hours of the summer, all of the test cells became extremely hot, up to 176 degrees Fahrenheit (80°C), as they baked in the desert sun. Due to the temperature sensitivity of the photovoltaic cells, their output was markedly hobbled across the board, right at the time they should have been producing their highest output.41 So who won the solar competition in Masdar City? Perhaps nobody.
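
The heat penalty follows a simple first-order rule found on manufacturers’ datasheets: power falls linearly as cell temperature rises above the 25°C test standard. A minimal sketch, assuming a typical crystalline-silicon coefficient (the coefficient is not a figure from the Masdar report):

[code]
# First-order temperature derating for crystalline silicon. The -0.4%/C
# coefficient is a typical datasheet value (an assumption); the 80 C
# cell temperature is from the Masdar account above.
def derated_power(p_rated, t_cell_c, gamma=-0.004):
    return p_rated * (1 + gamma * (t_cell_c - 25.0))

print(f"1,000 W panel at 80 C -> {derated_power(1000, 80):.0f} W")
# ~780 W: a fifth of rated output lost to heat alone, before soiling,
# haze, or misalignment are counted.
[/code]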

In addition to haze, humidity, soiling, misalignment, and temperature sensitivity, silicon solar cells suffer an aging effect that decreases their output by about 1 percent or more per year.42 Newer thin-film, polymer, paint, and organic solar technologies degrade even more rapidly, with some studies recording degradation of up to 50 percent within a short period of time. This limitation is regularly concealed because of the way reporters, corporations, and scientists present these technologies.43

For instance, scientists may develop a thin-film panel achieving, say, 13 percent overall efficiency in a laboratory. However, due to production limitations, the company that commercializes the panel will typically only achieve a 10 percent overall efficiency in a prototype. Under the best conditions in the field this may drop to 7–8.5 percent overall efficiency due just to degradation effects.44 Even then, the direct current (DC) output is not usable in a household until it is transformed. Electrical inverters transform the DC output of solar cells into the higher-voltage, oscillating alternating current (AC) that appliances and lights require. Inverters are 70–95 percent efficient, depending on the model and loading characteristics. As we have seen, other situational factors drag performance down even further. Still, when laboratory scientists and corporate PR teams write press releases, they report the more favorable figure, in this case 13 percent. Journalists at even the most esteemed publications will often simply transpose this figure into their articles. Engineers, policy analysts, economists, and others in turn transpose the figure into their assessments.
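
Chaining the attrition this passage describes makes the gap between headline and delivered performance concrete. The factors below come from the ranges quoted in this section; the specific values picked within those ranges are mid-range assumptions:

[code]
# From press-release efficiency to delivered AC output, multiplying
# the losses described above. Values are mid-range picks (assumptions)
# from the ranges this section quotes.
lab_efficiency = 0.13          # the figure in the press release
to_production  = 0.10 / 0.13   # lab cell -> commercial prototype (10%)
degradation    = 0.085 / 0.10  # prototype -> best field figure (8.5%)
inverter       = 0.85          # mid-range of the 70-95% quoted
soiling        = 0.90          # e.g., the 20% summer losses, averaged

delivered = (lab_efficiency * to_production * degradation
             * inverter * soiling)
print(f"delivered: {delivered:.1%} of sunlight vs. the 13% headline")
[/code]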

Illustration 1: Solar system challenges. The J. F. Williams Federal Building in Boston was one of the earliest Million Solar Roofs sites and the largest building-integrated array on the East Coast. As with most integrated systems, the solar cells do not align with the sun, greatly reducing their performance. In 2001 technicians replaced the entire array after a system malfunction involving electrical arcing, water infiltration, and broken glass. The new array has experienced system-wide aging degradation as well as localized corrosion, delamination, water infiltration, and sudden module failures. (Photo by Roman Piaskoski, courtesy of the U.S. Department of Energy)

With such high expectations welling up around solar photovoltaics, it is no wonder that newbie solar cell owners are often shocked by the underwhelming performance of their arrays in the real world. Ownership brings practical surprises too: a roof repair, for example, may require disconnecting, removing, and reinstalling the entire rooftop array. Yet an even larger surprise awaits—within about five to ten years, the solar system will abruptly stop producing power. Why? Because a key component, the electrical inverter, will eventually fail. While the solar cells themselves can survive for twenty to thirty years, the associated circuitry does not. Inverters for a typical ten-kilowatt solar system last about five to eight years, so owners must replace them two to five times during the productive life of a photovoltaic system. Fortunately, just about any licensed electrician can easily swap one out. Unfortunately, they cost about eight thousand dollars each.45
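
Run that arithmetic and the inverter line item alone is striking: two to five replacements at roughly eight thousand dollars apiece adds sixteen to forty thousand dollars to the lifetime cost of a single residential array.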

Free Panels, Anyone?

Among the CEOs and chief scientists in the solar industry, there is surprisingly little argument that solar systems are expensive.46 Even an extreme drop in the price of polysilicon, the most expensive technical component, would do little to make solar cells more competitive. Peter Nieh, managing director of Lightspeed Venture Partners, a multibillion-dollar venture capital firm in Silicon Valley, contends that cheaper polysilicon won’t reduce the overall cost of solar arrays much, even if the price of the expensive material dropped to zero.47 Why? Because the cost of other materials such as copper, glass, plastics, and aluminum, as well as the costs for fabrication and installation, represent the bulk of a solar system’s overall price tag. The technical polysilicon represents only about a fifth of the total.

Furthermore, Keith Barnham, an avid solar proponent and senior researcher at Imperial College London, admits that unless efficiency levels are high, “even a zero cell cost is not competitive.”48 In other words, even if someone were to offer you solar cells for free, you might be better off turning the offer down than paying to install, connect, clean, insure, maintain, and eventually dispose of the modules—especially if you live outside the remote, dry, sunny patches of the planet such as the desert extending from southeast California to western Arizona. In fact, the unanticipated costs, performance variables, and maintenance obligations for photovoltaics, too often ignored by giddy proponents of the technology, can swell to unsustainable magnitudes. Occasionally buyers decommission their arrays within the first decade, leaving behind graveyards of toxic panels teetering above their roofs as epitaphs to a fallen dream. Premature decommissioning may help explain why American photovoltaic electrical generation dropped during the last economic crisis even as purported solar capacity expanded.49 Curiously, while numerous journalists reported on solar infrastructure expansion during this period, I was unable to locate a single article covering the contemporaneous drop in the nation’s solar electrical output, which the Department of Energy quietly slid into its annual statistics without a peep.

The Five Harms of Solar Photovoltaics

Are solar cells truly such a waste of money and resources? Is it really possible that today’s solar technologies could be so ineffectual? Could they even be harmful to society and the environment?

It would be egotistically convenient if we could dismiss the costs, side effects, and limitations of solar photovoltaics as the blog-tastic hyperbole of a few shifty hacks. But we can’t. These are the limitations of solar cells as directly reported from the very CEOs, investors, and researchers most closely involved with their real-world application. The side effects and limitations collected here, while cataclysmically shocking to students, activists, business people, and many other individuals I meet, scarcely raise an eyebrow among those in the solar industry who are most intimately familiar with these technologies.

Few technicians would claim that their cells perform as well in the field as they do under the strict controls of a testing laboratory; few electricians would deny that inverters need to be replaced regularly; few energy statisticians would argue that solar arrays have much of an impact on global fossil-fuel consumption; few economists would insist we could afford to power the planet with existing solar technologies. In fact, the shortcomings reported in these pages have become remarkably stable facts within the communities of scientists, engineers, and other experts who work on solar energy. But because of specialization and occupational silo effects, few of these professionals capture the entire picture at once, leaving us too often with disjointed accounts rather than an all-inclusive rendering of the solar landscape.

Collected and assembled into one narrative, the costs, side effects, and limitations of solar photovoltaics become particularly worrisome, especially within the context of our current national finances and limited resources for environmental investments. The point is not to label competing claims about solar cells as simply true or false (we have seen they are a bit of both), but to determine if these claims have manifested themselves in ways and to degrees that validate solar photovoltaics as an appropriate means to achieve our environmental goals.

I’d like to consider some alternate readings of solar cells that are a bit provocative, perhaps even strident. What if we can’t simply roll our eyes at solar cells, shrugging them off as the harmless fascination of a few silver-haired hippies retired to the desert? What if we interpret the powerful symbolism of solar cells as metastasizing in the minds of thoughtful people into a form that is disruptive and detrimental?

First, we could read these technologies as lucrative forms of misdirection—shiny sleights of hand that allow oil companies, for example, to convince motorists that the sparkling arrays atop filling stations somehow make the liquid they pump into their cars less toxic. The fact that some people now see oil companies that also produce solar cells as “cleaner” and “greener” is a testament to a magic trick that has been well performed. Politicians have proven equally adept in such trickery, holding up solar cells to boost their poll numbers in one hand while using the other to palm legislation protecting the interests of status quo industries.

Second, could the glare from solar arrays blind us to better alternatives? If solar cells are seen as the answer, why bother with less sexy options? Many homeowners are keen on upgrading to solar, but because the panels require large swaths of unobstructed exposure to sunlight, solar cells often end up atop large homes sitting on widely spaced lots cleared of surrounding trees, which could have offered considerable passive solar benefits. In this respect, solar cells act to “greenwash” a mode of suburban residential construction, with its car-dependent character, that is hardly worthy of our explicit praise. Sadly, as we shall see ahead, streams of money, resources, and time are diverted away from less visible but more durable solutions in order to irrigate the infertile fields of solar photovoltaics.

Third, might the promise of solar cells act to prop up a productivist mentality, one that insists that we can simply generate more and more power to satisfy our escalating cravings for energy? If clean energy is in the pipeline, there is less motivation to use energy more effectively and responsibly.50

Fourth, we could view solar photovoltaic subsidies as perverse misallocations of taxpayer dollars. For instance, the swanky Honig Winery in Napa Valley chopped down some of its vines to install $1.2 million worth of solar cells. The region’s utility customers paid $400,000 of the tab. The rest of us paid another 30 percent through federal rebates. The 2005 federal energy bill delivered tax write-offs of another 30 percent. Luckily, we can at least visit the winery to taste the resulting vintages, but even though we’re helping pay its electric bill, the winery still charges ten dollars for the privilege. Honig is just one of several dozen wineries to take advantage of these government handouts of our money, and wineries represent just a handful of the thousands of industries and mostly wealthy households that have done the same.

Fifth, photovoltaic processes—from mineral exploration to fabrication, delivery, maintenance, and disposal—generate their own environmental side effects. Throughout the photovoltaic lifecycle, as we have reviewed, scientists are discovering the same types of short- and long-term harms that environmentalists have historically rallied against.

Finally, it is worth acknowledging that there are a few places where solar cells can generate an impact today, but for the most part, it’s not here. Plopping solar cells atop houses in the well-trimmed suburbs of America and under the cloudy skies of Germany seems an embarrassing waste of human energy, money, and time. If we’re searching for meaningful solar cell applications today, we’d better look to empowering remote communities whose only other options are sooty candles or expensive disposable batteries—not toward haplessly supplementing power for the wine chillers, air conditioners, and clothes dryers of industrialized nations. We environmentalists have to consider whether it’s reasonable to spend escalating sums of cash to install primitive solar technologies when we could instead fund preconditions that might someday make them viable solutions for a greater proportion of the populace.

Charting a New Solar Strategy

Current solar photovoltaic technologies are ineffective at preventing greenhouse gas emissions and pollution, which is especially disconcerting considering how rapidly their high costs suck money away from more potent alternatives. To put this extreme waste of money into perspective, it would be far more cost-effective to patch the leaky windows of a house with gold leaf than to install solar cells on its roof. Would we support a government program to insulate people’s homes using gold? Of course not. Anyone could identify the absurd profligacy of such a scheme (in part, because we have not been repeatedly instructed since childhood that it is a virtuous undertaking).

Any number of conventional energy strategies promise higher dividends than solar cell investments. If utilities care to reduce CO2, then for just 10 percent of the Million Solar Roofs Program cost, they could avoid twice the greenhouse gas emissions by simply converting one large coal-burning power plant over to natural gas. If toxicity is a concern, legislators could direct the money toward low-tech solar strategies such as solar water heating, which has a proven track record of success. Or for no net cost at all, we could support strategies to bring our homes and commercial buildings into sync with the sun’s energy rather than working against it. A house with windows, rooflines, and walls designed to soak up or deflect the sun’s energy in a passive way will continue to do so unassumingly for generations, even centuries. Fragile solar photovoltaic arrays, on the other hand, are sensitive to high temperatures, oblige owners to perform constant maintenance, and require extremely expensive components to keep them going for their comparatively short lifespan.

Given the more potent and viable alternatives, it’s difficult to see why federal, state, and local photovoltaic subsidies, including the $3.3 billion solar roof initiative in California, should not be quickly scaled down and eliminated as soon as practicable. It is hard to conceive of a justification for extracting taxes from the working class to fund installations of Stone Age photovoltaic technologies high in the gold-rimmed suburbs of Arizona and California.

The solar establishment will most certainly balk at these observations, quibble about the particulars, and reiterate the benefits of learning by doing and economies of scale. These, however, are tired arguments. Based on experiences in California, Japan, and Europe, we now have solid field data indicating that (1) the benefits of solar cells are insignificant compared to the expense of realizing them, (2) the risks and limitations are substantial, and (3) the solar forecast isn’t as sunny as we’ve been led to believe.

Considering the extreme risks and limitations of today’s solar technologies, the notion that they could pose any sort of challenge to the fossil-fuel establishment starts to appear not merely optimistic, but delusional. It’s like believing that new parasail designs could challenge the commercial airline industry. Perhaps the only way we could believe such an outlandish thought is if we are told it over, and over, and over again. In part, this is what has happened. Since we were children, we’ve been promised by educators, parents, environmental groups, journalists, and television reporters that solar photovoltaics will have a meaningful impact on our energy system. The only difference today is that these fairy tales come funded through high-priced political campaigns and the advertising budgets of BP, Shell, Walmart, Whole Foods, and numerous other corporations.

Solar cells shine brightly within the idealism of textbooks and the glossy pages of environmental magazines, but real-world experiences reveal a scattered collection of side effects and limitations that rarely mature into attractive realities. There are many routes to a more durable, just, and prosperous energy system, but the glitzy path carved out by today’s archaic solar cells doesn’t appear to be one of them.

2. Wind Power’s Flurry of Limitations

Evidence conforms to conceptions just as often as conceptions conform to evidence.

—Ludwik Fleck, Genesis and Development of a Scientific Fact


By the end of grade school, my mother maintains, I had attempted to deconstruct everything in the house at least once (including a squirrel that fell to its death on the front walk). Somewhere in the fog of my childhood, I shifted from deconstruction to construction, and one of my earliest machinations was a windmill, inspired by a dusty three-foot-diameter turbine blade lying idle in the garage thanks to my father’s job at a fan-and-turbine manufacturer. Fortunately, the turbine’s hub screws fit snugly around a found steel pipe, which formed a relatively solid, if rusty, axle for the contraption. I mounted the axle in wood rather than steel, since my parents had neglected to teach me to weld. There were no bearings, but I dusted the naked holes with powdered graphite for lubrication; I was serious. Lacking the resources to design a tower, a wood picnic table in the backyard proved sufficient.

Some subsequent day, as cool winds ripped leaves from surrounding oak trees and threw them at passersby, I hauled the rickety contraption from the garage to the picnic table, exposed nails and all. I first pulled the wooden mount up onto the table, weighing it down with bricks and other heavy objects. I then inserted the axle-and-turbine assembly. The already rotating blades hovered out over the table’s edge, but there was little time to appreciate my work. Before the lock pin was properly secured, the heavy blade had already begun to spin uncomfortably fast. Only at that moment did it become apparent that I had neglected to install a braking mechanism, but it was too late.

I removed a brick from the base and pressed it against the rotating axle to slow it down, pushing with all my might. The axle hissed as the blades effortlessly accumulated greater speed. I jumped back when the axle’s partially engaged lock pin flew out. The picnic table vibrated as the dull black blades melted into a grayish blur. The steel sails thumped through the air with a quickening rhythm of what in essence had become an upended lawnmower shrieking the song of a helicopter carrying a hundred cats in heat. What happened thereafter can only be deduced, because by the time the howling and clamor came to an abrupt end, my adrenaline-filled legs had already carried me well beyond the far side of the house.

I returned to find an empty picnic table in flames.

Now, if you can imagine a force ten thousand times as strong, you’ll begin to appreciate the power of modern wind turbines, weighing in at 750 tons and with blade sweeps wider than eleven full-size school buses parked end-to-end.1

Like solar cells, wind turbines run on a freely available resource that is exhibiting no signs of depletion. Unlike solar cells, though, wind turbines are economical—just a sixth the cost of photovoltaics, according to an HSBC bank study. Proponents insist that wind power’s costs have reached parity with natural-gas electrical generation. Coal-fired electricity is still less expensive, but if a carbon tax of about thirty dollars per ton is figured into the equation, proponents insist that wind achieves parity with coal as well.2 Either way, wind turbines seem far more pleasant as they sit in fields and simply whirl away.

Illustration 2: An imposing scale. Raising a blade assembly at night outside Brunsbüttel, Germany, with a second tower in the background. The turbine sits on 1,700 cubic yards of concrete with forty anchors, each driven eighty feet into the earth. (Photo by Jan Oelker, courtesy of Repower Systems AG)

Today’s wind turbines are specially designed for their task and as a result are far more technologically advanced than even those built a decade ago. New composites enable the spinning arms to reach farther and grab more wind while remaining flexible enough to survive forceful gusts. New turbines are also more reliable. In 2002, about 15 percent of turbines were out of commission at any given time for maintenance or repair; now downtime has dropped below 3 percent. Whereas a coal or nuclear plant mishap could slash output dramatically or even completely, wind farms can still pump out electricity even as individual turbines cycle through maintenance. Similarly, new wind farms start to produce power long before they are complete. A half-finished nuclear plant might be an economic boondoggle, but a half-finished wind farm is merely one that produces half the power. Adding capacity later is as simple as adding more turbines. Farmers who are willing to give up a quarter of an acre to mount a large turbine in their fields can expect to make about ten thousand dollars per year in profit without interrupting cultivation of the surrounding land. That’s not bad considering the same plot seeded with corn would net just three hundred dollars’ worth of bioethanol.3

At first glance, deploying wind turbines on a global scale does not appear to pose much of a challenge, at least not an insurmountable one. It seems that no matter what yardstick we use, wind power is simply the perfect solution.

If only it were that simple.

Wind Power in Sixty Seconds or Less

As our sun heats the earth’s lower atmosphere, pockets of hot air rise and cooler air rushes in to fill the void. This creates wind. For over two thousand years humans harnessed wind for pumping water, grinding grain, and even transatlantic travel. In fact, wind power was once a primary component of the global energy supply. No more. The Industrial Revolution (which could just as easily have been dubbed the Coal Revolution) toppled wind power’s reign. Shipbuilders replaced masts with coal-fired steam engines. Farmers abandoned windmills for pumps that ran on convenient fossil fuels. Eventually, industrialists led the frail wind-power movement to its grave, and gave it a shove. There it would lie, dead and forgotten, for well over a hundred years, until one crisp fall day when something most unexpected occurred.

A hundred years is a short beat in the history of humans but a rather lengthy period in their history of industrialization. And when wind power was eventually exhumed, it found itself in a much-altered world, one that was almost entirely powered by fossil fuels. There were many more humans living at much higher standards of living. A group of them was rather panicked over the actions of an association called the Organization of Arab Petroleum Exporting Countries (OAPEC). The scoundrels had decided to turn off their fossil-fuel spigot.

The oil embargo of 1973 marked the resurrection of wind power. Politicians dusted off wind power, dressed it in a green-collared shirt, and shoved it into the limelight as the propitious savior of energy independence. Wind power was worshiped everywhere, but nowhere more than in California. During the great wind rush of the early 1980s, California housed nearly 90 percent of global wind-generation capacity, fueled by tax subsidies and a wealthy dose of sunny optimism.4 And since the windmill industry had vanished long ago, fabricators cobbled together the new turbines much like the one of my youth, with an existing hodgepodge of parts already available from shipbuilders and other industries. Perhaps predictably, when the oil started to flow again, political support for wind energy subsidies waned. Eventually they vanished altogether. But now, with so many humans using so much energy, it wouldn’t be another hundred years before they would call on wind power again.

During the first decade of the twenty-first century, oil prices skyrocketed. But another phenomenon shot up faster: media and political reporting on wind energy.5 For every doubling of oil prices, media coverage of wind power tripled. Capacity grew too—as much as 30 percent annually. But at the end of the decade, an economic crisis smacked wind down again. Wind projects across the planet were cancelled, signaled most prominently by the flapping coat tails of energy tycoon T. Boone Pickens, as he fled from his promise to build massive wind farms in Texas. Financial turmoil further embrittled the fragile balance sheets of turbine manufacturers until orders began to stabilize again around 2011.

By 2012, worldwide wind-power generation capacity had surpassed two hundred gigawatts—many times the capacity of solar photovoltaics but not enough to fulfill even a single percent of global energy demand. We have thrice witnessed the fortunes of wind shifting in the industry’s sail and we may find the future of wind power to be similarly constrained, as its detractors are raring to point out.

The Detractors

A boot tumbling around in a clothes dryer—that’s how residents of Cape Cod describe the wind turbine whining and thumping that keeps them awake at night and gives them headaches during the day. One wind turbine engineering manual confirms that this noise, produced when blades swoop by the tower, can reach one hundred decibels, or about as loud as a car alarm. Multiple turbines can orchestrate an additive effect that is especially maddening to nearby residents. The fact that there is already a condition recognized as “wind turbine syndrome” testifies to the seriousness of their protest. In addition to noise, detractors point to various other grievances. For instance, turbine blades occasionally ice up, dropping or throwing ice at up to two hundred miles per hour. They may also toss a blade or two, creating a danger zone within a radius of half a mile.6 Beyond this zone, residents are relatively safe from harm, and outside a one-mile radius the racket of wind turbines diminishes to the level of a quiet conversation. Ideally, energy firms would not build wind turbines near homes and businesses, but many of the other prime windy locations are already taken, geologically unstable, inaccessible, or lie within protected lands such as national parks. As a result, desperate wind power developers are already pushing their turbines both closer to communities and out into the sea, a hint as to limitations ahead.

Wind farm opponents tend to arise from one of two groupings, which are not always so easily distinguished from one another. The first are the hundreds of NIMBY (Not in My Backyard) organizations. NIMBY activists live near beautiful pastures, mountain ridges, and other sights they’d prefer to pass on to their children untarnished. They rarely have an economic interest in, or anything else to gain from, erecting lines of wind turbines across their landscapes, each taller than the Statue of Liberty. Can we really blame them for being upset? Generating the power of a single coal plant would require a line of turbines over one hundred miles long. In a New York Times editorial, Robert F. Kennedy Jr. declared,

I wouldn’t build a wind farm in Yosemite Park. Nor would I build one on Nantucket Sound. . . . Hundreds of flashing lights to warn airplanes away from the turbines will steal the stars and nighttime views. The noise of the turbines will be audible onshore. A transformer substation rising 100 feet above the sound would house giant helicopter pads and 40,000 gallons of potentially hazardous oil.7


Kennedy and other politically well-connected residents of the Sound echo concerns voiced around the world. Even in Europe, where residents generally support wind power, locals often squash plans to build the rotating giants. In the Netherlands, local planning departments have denied up to 75 percent of wind project proposals.8

The second group of wind detractors is an unofficial assemblage of coal, nuclear power, and utility companies happy to keep things just as they are. Contrary to public opinion, they aren’t too concerned about wind turbines eroding their market share. They’re far more concerned that legislators will hand over their subsidies to wind-farm developers or institute associated regulations. These mainstay interests occasionally speak through their CEOs or public relations departments, but their views more frequently flow to the media via a less transparent route mediated by think tanks and interest groups. The Cato Institute has taken aim at wind power for over a decade, and its criticisms have been published in the National Review, Marketplace, the Washington Times, and USA Today. The Centre for Policy Studies, founded in part by Margaret Thatcher, has done the same. A keen eye can identify these corporate perspectives, which emanate in the form of white papers, newspaper articles, research reports, letters to the editor, and op-eds, because they all have one distinct marking in common. They invariably conclude with policy recommendations calling on public and legislative support for our friends in the fossil-fuel and nuclear industries.

NIMBY groups have found a strange bedfellow in these corporate energy giants. Each faction is more than willing to evoke wind power drawbacks that the other develops. Environmentalists sometimes find themselves caught in the mix. For instance, during the 1980s, the Sierra Club rose in opposition to a wind farm proposed for California’s Tejon Pass, citing risks to the California condor, a bird extinct in the wild that biologists were planning to reestablish from a small captive population. A Sierra Club representative quipped that the turbines were “Cuisinarts of the sky,” and the label stuck. Detractors passionately cite the dangers to birds and bats as giant blades weighing several tons, their tips moving at two hundred miles per hour, spin within flight paths. However, newer turbine models spin more slowly, making them less of a threat. Their smooth towers are less appealing for nesting than the latticed towers of earlier designs. According to one study, each turbine kills about 2.3 birds per year, which, even when multiplied by ten thousand turbines, is a relatively small number compared to the four million birds that crash into communication towers annually, or the hundreds of millions killed by house cats and windows every year.9 Even the Sierra Club no longer seems overly concerned, pointing out that progress is being made to protect many bird habitats and that turbine-related death “pales in comparison to the number of birds and other creatures that would be killed by catastrophic global warming.”10 The Sierra Club’s new positive spin on wind turbines is indicative of a shift in focus within the mainstream environmental movement—toward a notion that technologies such as wind turbines will mitigate climate change and related environmental threats posed by coal-fired power plants. Ahead, we’ll consider why this is a frightfully careless assumption to make.

Detractors also cite wind turbines’ less-well-known propensity to chop and distort radio, television, radar, and aviation signals in the same way a fan blade can chop up a voice. The United Kingdom has blocked several proposals for offshore wind farms, citing concerns about electromagnetic interference.11 The 130-turbine Nantucket Sound project (known as the Cape Wind Project) stumbled in 2009 when the Federal Aviation Administration (FAA) claimed the offshore wind farm would interrupt navigation signals. FAA regulators insisted the developer pay $1.5 million to upgrade the radar system at Massachusetts Military Reservation or, if the upgrade could not solve the interference problem, pay $12 million to $15 million to construct an entirely new radar facility elsewhere.12 A large expense to be sure, but not an insurmountable cost for a large wind-farm developer. Other wind-farm risks are not so easily reconciled.

Illustration 3: Road infiltrates a rainforest. Roads offer easy access to loggers and poachers. Here a roadway backbone supports emerging ribs of access roads, which are dissolving this rainforest from the inside. (Image courtesy of Jacques Descloitres, MODIS Land Rapid Response Team, NASA/GSFC)

For instance, if you view satellite images of the Brazilian state of Pará, you’ll see strange brownish formations of barren land that look like gargantuan fish skeletons stretching into the lush rainforest. These are roads. A full 80 percent of deforestation occurs within thirty miles of a road. Many of the planet’s strongest winds rip across forested ridges. In order to transport fifty-ton generator modules and 160-foot blades to these sites, wind developers cut new roads. They also clear strips of land, often stretching over great distances, for power lines and transformers.13 These provide easy access to poachers as well as loggers, legal and illegal alike. Since deforestation degrades biodiversity, threatens local livelihoods, jeopardizes environmental services, and represents about 20 percent of greenhouse gas emissions, this is no small concern.

Considering Carbon

The presumed carbon benefits of a remote wind farm, if thoughtlessly situated, could be entirely wiped out by the destructive impact of the deforestation surrounding it—a humbling reminder that the technologies we create are only as durable as the contexts we create for them.

Wind proponents are keen to proclaim that their turbines don’t spew carbon dioxide. This is correct, but it is the answer to the wrong question. We’ll consider some more revealing questions soon, but let’s begin with a basic one: turbines may not exhaust CO2, but what about the total carbon footprint of the mining, building, transporting, installing, clearing, maintaining, and decommissioning activities supporting them? Fossil fuels (including, especially, toxic bunker fuels) supply the power behind these operations. The largest and most efficient turbines rest upon massive carbon-intensive concrete bases, which support the hulking towers and (usually) prevent them from toppling in heavy winds. Any thoughtful consideration of the carbon implications of wind turbines should acknowledge these activities.

Nevertheless, carbon footprint calculations can be rather shifty, even silly at times, despite their distinguished columns of numerical support. They hinge on human assumptions and simplifications. They ignore the numerous other harms of energy production, use, and distribution. They say nothing of political, economic, and social contexts. They offer only the most rudimentary place to start.

British Conservative Party leader David Cameron installed a wind turbine on his London home, winning him positive reviews from econnoisseurs. However symbolically valuable, it was likely a waste of time, money, and energy according to carbon hawks. That’s because homes, trees, towers, and other structures in cities choke airflow, which too often leaves the turbines unmotivated to spin. A British study claims that a third of small wind turbine locations in the windy coastal city of Portsmouth will never work off the carbon footprint invested to build and install them. A full two-thirds of Manchester’s wind turbines leave their homes with a higher carbon footprint, not a lower one.14

Forceful gusts can whip wind shear up and around buildings, resulting in cracked blades or even catastrophic system failure.15 The unexpected disintegration of a turbine with blades approaching the size and rotational velocity of a helicopter rotor could understandably produce significant damage anywhere, but in a city these harms become especially alarming. A single failure can take down power lines, tear through buildings, and pose obvious risks to residents. In practice, there are so many challenges to installing wind turbines on buildings, such as noise, insurance, and structural issues, that Mike Bergey, founder of a prominent turbine manufacturer, stated he wished people would stop asking him “about mounting turbines on buildings.”16

Lifecycle calculations reveal that wind power technologies actually rely heavily on fossil fuels (which is partly why their costs have dramatically increased over the last decade). In practice, this leaves so-called renewable wind power as a mere fossil-fuel hybrid. This spurs some questions. First, if fossil-fuel and raw-material prices pull up turbine costs, to what degree can nations rely on wind power as a hedge against resource scarcity? Moreover, where will the power come from to build the next generation of wind turbines as earlier ones retire from service? Alternative-energy productivists would likely point to the obvious—just use the power from the former generation. But if we will presumably be using all of that output for our appliances, lighting, and driving the kids to school, will there be enough excess capacity left over? Probably not—especially given that the most favorable windy spots, which have been largely exploited, are purportedly satisfying less than 1 percent of global power demands. We’ll likely have to fall back on fossil fuels.

Wind is renewable. Turbines are not.

Nevertheless, if we were to assume that NIMBY objections could be overcome (many could be), that turbines were built large enough to exceed their carbon footprint of production (as they usually are), and that other safety risks and disturbances could be lessened (certainly plausible), is there really anything to prevent wind energy from supplanting the stranglehold that dirty coal plants have on the world’s electricity markets? Wind is a freely available resource around the globe, it doesn’t have to be mined, and we don’t have to pay to have it imported. There is, however, one little issue—one that is causing headaches on a monumental scale—which will lead us closer to understanding the biggest limitation of wind power.

Occasionally, wind has been known to stop.

A Frustratingly Unpredictable Fuel

Imagine if your home’s electrical system were infested by gremlins that would, without warning, randomly vary your electrical supply—normal power, then half power, then three-quarter power, then off, then on again. Some days you’d be without electricity altogether and on others you’d be overloaded with so much current your appliances would short circuit and perhaps even catch on fire. This is the kind of erratic electrical supply that wind power grid operators deal with on a minute-to-minute basis. Whenever the wind slows, they must fire up expensive and dirty peaker power plants in order to fill the supply gap. Even when the wind is blowing, they often leave the plants on idle, wasting away their fossil fuels so they’re ready when the next lull strikes. To make matters worse, grid operators must perform these feats atop a grid of creaky circuitry that was designed decades ago for a far more stable supply.

Traditional coal, natural gas, nuclear, and hydroelectric power stations provide a steady stream of power that operators throttle to match demand. Conversely, wind and solar electrical output varies dramatically. Windy periods are especially difficult to predict. Even when the wind is blowing more consistently, wind turbines encounter minor gusts and lulls that can greatly affect their minute-to-minute output. Over still periods, wind turbines can actually suck energy off the grid since stalled turbines require electrical power to operate their massive steering systems and other idling functions.17

Figure 3: Fussy wind. Wind farm output varies unpredictably. This chart shows the output of a large South Australian wind farm (in megawatts) over seventy-two hours. (Data from Tom Quirk)

Solar radiation is more predictable in frequency but not in intensity, as shown in Figure 4. Even on mostly sunny days, solar photovoltaic output can vary due to dust, haze, heat, and passing clouds.18

Grid operators can handle small solar and wind inputs without much sweat (they manifest as small drops in demand). However, significant unpredictable inputs can endanger the very stability of the grid. Therefore, wind power isn’t well suited to supply base-load power (i.e., the power supplying minimum demands throughout the day and night). If operators relied on wind power as a base-load supply, traffic signals, hospitals, and other essential services would be cut whenever the wind stopped. Even though wind power companies employ teams of meteorologists to predict wind speeds on an hour-to-hour basis, they still rely on coal, natural gas, hydroelectric, and nuclear power for backup consistency.

Figure 4: Five days of sun. This plot shows the output (in kilowatts) of a large photovoltaic system in Springerville, Arizona, over five days. Heat, haze, clouds, and other factors affect minute-to-minute solar output unpredictably. (Data from Tucson Electric Power Company)

This intermittency is already causing headaches in the country with the highest number of wind turbines per capita, Denmark. Over five thousand turbines produce the equivalent of about 20 percent of the nation’s electricity demand, but not even half of that output can be used or stored within the country.19 Since the Danes don’t suddenly start using more electricity whenever it’s windy, the grid verges on excessive supply, and grid operators are forced to dump excess electricity into neighboring Norway, Sweden, and Germany. America’s grids are even less accommodating; many cannot handle more than 2 percent intermittent wind power. Even with a national reinvention of the power network, such as the smart grid projects coming online in Hawaii and California, the most optimistic engineers don’t expect them to handle any more than 30 percent live wind power, even if more turbines could be erected.

In one way, the Danes are fortunate. They can direct some excess wind power to Norway, where large pumps thrust water high into mountain reservoirs to be tapped by hydroelectric power plants when the wind slows.20 This is an effective, yet expensive, strategy for buffering the erratic output of wind turbines. In many of the world’s flat windy plains, this simply isn’t an immediately available option, but turbines can be wired to mountainous locations for about $3 million per mile. Nevertheless, accommodating pumped storage on a large scale would require many more hydropower facilities, which bring their own set of disadvantages, as we will discuss later. Alternately, wind turbines can pressurize air into hermetically sealed underground caverns to be tapped later for power, but the conversion is inefficient and suitable geological sites are rare and often far away from electricity users. Finally, wind energy can be stored in batteries, flywheels, or as hydrogen gas, but these strategies are mind-numbingly pricey, as we shall explore later. Despite all the hype surrounding energy storage, experts debate whether these options could ever become effective large-scale solutions within the next thirty to fifty years, let alone in the more immediate future.

Policymakers, journalists, and wind proponents alike regularly misunderstand or misrepresent these windy realities. Proponents frequently declare that wind power costs the same as natural gas or just a bit more than coal, but this is misleading. Alternative-energy firms aren’t required to back up their temperamental products, which makes them seem less pricey than they are in practice. It’s during the power conditioning steps that the total costs of wind power start to multiply. The inconsistency of wind power necessitates a dual system: the construction and maintenance of one power supply network for when the wind is blowing and a second for when it isn’t—an incredibly expensive luxury.

Where the Wind Blows

We don’t always get wind power when we want it, and we less often get it where we want it. In the United States, the strongest winds are all offshore. The strongest terrestrial gusts blow within a band stretching from the northern edge of Texas up through the Dakotas—right where almost nobody lives. Getting the wind crop to cities will be both technically knotty and expensive. As the director of North Dakota’s Energy and Environmental Resource Center quips, “We produce the crop but we can’t get it to the grain elevator.”21 Grid developers will also bump into right-of-way challenges since most residents disapprove of power lines as much as they do of wind turbines. The Sierra Club is actively challenging grid expansion through national forests, noting that the coal industry is ready to pounce on green grids.

Americans cannot count on a comprehensive smart grid any time soon, but the projected cost falls within the bounds of reason and an upgraded grid would bring numerous benefits. Most notably, a comprehensive smart grid would flip the long-held operating rule of power supply. Instead of utilities adjusting their output to meet demand, a smart grid would allow homes and businesses to adjust their electrical use automatically, based on the availability of power. That’s because a smart grid coordinates electrical sensors and meters with basic information technology and a communications network akin to the Internet that can transform dumb power lines into a nimble and responsive transmission system. When a wind gust blows, tens of thousands of refrigerators will power up to absorb the added capacity and when the wind lulls, they will immediately shut down again. Of course, not every household and industrial appliance lends itself to be so flexibly controlled—a respirator at a hospital, for example—but a smarter grid will nevertheless minimize the need for expensive peaker power plants and spinning reserve (i.e., idling power plants). Given incentives, consumers could trim peak electricity consumption by 15 percent or more, saving hundreds of billions of dollars in the process.22
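
The control logic at the heart of this idea is simple enough to sketch. The toy example below is purely illustrative; the signal convention and the appliance model are invented here, and real demand-response protocols are far more elaborate:

    # Toy demand-response rule (illustrative only; conventions are invented).
    # supply_signal > 1.0 means surplus generation; < 1.0 means scarcity.
    def should_run(flexible: bool, supply_signal: float) -> bool:
        if not flexible:                # e.g., a hospital respirator: always on
            return True
        return supply_signal >= 1.0     # flexible loads wait for windy surpluses

    for signal in (1.3, 0.7):           # a gust, then a lull
        print("fridge runs" if should_run(True, signal) else "fridge defers")

Multiply that one rule across tens of thousands of refrigerators and water heaters, and demand begins to follow supply rather than the other way around.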

Smart grids are less vulnerable to power leaks and electricity pilfering—two big holes in the existing national grid. Furthermore, smart grids are less likely to experience power outages, which cost Americans about $150 billion every year and require dirty diesel backup generators to fill gaps in service.23 By simply plugging leaks and avoiding needless inefficiencies, a nationwide smart grid would save a stream of power equivalent to the raw output of thirty-five thousand large wind turbines. The energy conservation savings that smart grids enable would be greater yet—probably many times greater. And unlike the wildly optimistic conjectures propping up alternative energy policies, smart grid estimations are quite sound; numerous other countries have already rolled out similar upgrades with great success. Sweden, for instance, installed smart meters across the nation quite some time ago.

There is much work to be done if the United States is ever to make similar strides. Regulators will have to coordinate standards and negotiate how the costs and benefits will be shared among the nation’s three hundred utilities, five hundred transmission owners, and hundreds of millions of customers. Additionally, a connected smart grid will require a different form of security than the comparatively dumb grids of today. Unfortunately, these workaday tasks are all too easy to cast aside when the magical lure of solar cells and wind turbines presses so insistently upon the imaginations of politicians, environmentalists, and the media.

Capacity Versus Production

Do you know the maximum speed of your car? It is safe to venture that most drivers don’t, save perhaps for German autobahners, since they rarely if ever reach maximum speed. The same holds for power plants—they can go faster than they do. A plant’s maximum output is termed “nameplate capacity,” while the actual output over time is called “production.” The difference is simple, yet these two measures are confused, conflated, and interchanged by journalists, politicians, and even experts.

A “capacity factor” indicates what percentage of the nameplate maximum capacity a power plant actually produces over time. In traditional plants, operators control production with a throttle. A small one-hundred-megawatt coal plant will only produce 74 percent of that amount on average, or seventy-four megawatts.24 For wind and solar, as we have already seen, the throttle is controlled by Mother Nature’s little gremlins. A large wind farm with a nameplate capacity of one hundred megawatts will produce just twenty-four megawatts on average since the wind blows at varying strengths and sometimes not at all.25 Every generation mechanism is therefore like a bag of potato chips—only partially full—as shown in Figure 5.

In order to match the production of a large 1,000-megawatt coal-fired power plant with a wind farm, 1,000 megawatts of wind turbines won’t be enough. For an even swap, we’d need more than three times the wind capacity, about 3,100 megawatts. Both a 1,000-megawatt coal plant producing on average at 74 percent of capacity and a 3,100-megawatt wind farm producing on average at 24 percent of capacity will yield about the same output over time. Of course, this hypothetical comparison is still inadequate for real-world comparisons given the inconsistency of wind power. Therefore, energy analysts use a reliability factor to measure the minimum percentage of wind power that turbines can deliver 90 percent of the time. Taking this into account, we would need up to 18,000 megawatts of wind power to offset a 1,000-megawatt fossil-fuel or nuclear plant 90 percent of the time.26 As Leigh Glover, a policy fellow at the Center for Energy and Environmental Policy at the University of Delaware, sums up, “When basic calculations are completed for the number of wind turbines or PV arrays needed to replace the world’s coal-fired power stations, the resulting scenarios verge on nothing less than bizarre.”27
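
The arithmetic behind that even swap is worth making explicit; a minimal sketch, using only the figures above:

    # Average production = nameplate capacity x capacity factor.
    def average_mw(nameplate_mw, capacity_factor):
        return nameplate_mw * capacity_factor

    print(average_mw(1000, 0.74))   # coal plant: 740 MW on average
    print(average_mw(3100, 0.24))   # wind farm: about 744 MW on average

The two deliver roughly the same energy over time, yet the wind farm requires more than triple the nameplate machinery, and far more still once reliability is factored in.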

Figure 5: U.S. capacity factors by source. A capacity factor is the percentage of the nameplate maximum capacity that a power plant actually produces over time. Fossil fuel, hydro, and nuclear plants attain nearly 100 percent of maximum capacity when fully throttled, but lulls in demand and cost differentials leave them producing less. Natural gas is more expensive than coal, so power companies turn off gas plants first when demand drops. Weather variables dictate wind and photovoltaic capacity factors. (Data from U.S. Department of Energy)

In fact, the rise of wind power in the United States has sadly not shuttered a single coal-powered plant.28 So why might we think building more turbines will magically serve us any better? Well, it’s likely because the story lines surrounding wind power are so compelling. And it just so happens that part of that magic was manufactured.

Manufacturing the Magic

When President Obama premiered his clean energy initiative in Newton, Iowa, he cited a prominent U.S. Department of Energy (DOE) report showing that the nation could easily obtain 20 percent of its electricity from wind turbines by 2030—he may have been completely unaware that the report’s key dataset wasn’t from the DOE at all. In fact, if genuine DOE cost and performance figures had been used, the report’s authors would likely have come to the opposite conclusion—20 percent wind by 2030 will be logistically complex, enormously expensive, and perhaps ultimately unachievable.

Much of the enthusiasm surrounding wind power in recent years has grown out of this prominent Bush-era report entitled 20% Wind Energy by 2030, which concludes that filling 20 percent of the nation’s grid with wind power is achievable and will come at a cost described as “modest.” The authoritative DOE report has been held up as a model for charting a course for wind energy funding; it has been covered by media sources across the globe, presented to congressional leaders, invoked by two presidents, and supported by the Sierra Club, the Worldwatch Institute, the Natural Resources Defense Council, and dozens of other organizations.29 In fact, during my investigative research on the study, I didn’t come across a single critical review of its findings. It is therefore particularly intriguing to note that the report is based on key assumptions, hidden within a second appendix, which are so explicitly incongruent with bona fide DOE data that many people might have considered them to be outright fraudulent had they not been produced within the protective halo surrounding alternative-energy research. This DOE report, which probably seemed ecologically progressive to its unwitting list of environmentalist cosponsors, may ultimately prove a tremendous disservice to their cause.

The report’s most remarkable conclusion is simple. Filling 20 percent of the grid with wind power over the next twenty years will cost just 2 percent more than a scenario without wind power.30 The conclusion teeters atop a conspicuous pile of cost and performance figures developed by industry consultants, despite the fact that the DOE already spends millions of dollars tabulating the same sorts of data on a routine basis. The report cites four “major” contributors outside the Department of Energy: a trade organization called the American Wind Energy Association (AWEA) and three consulting firms—Black and Veatch, Energetics Incorporated, and Renewable Energy Consulting Services. Might any one of these groups have something to gain from painting an optimistic rendering of wind’s future? It turns out they all do. And that potential gain can be measured in billions.

When the report was written, the AWEA's board of directors included executives from General Electric, JP Morgan, Shell, John Deere, and a handful of wind power companies including T. Boone Pickens’s company Mesa Power. As an industry group, the AWEA was interested in orchestrating a positive spin on anything wind. The AWEA salivated in anticipation of preparing a pro-wind report enshrouded by the credibility of the Department of Energy.

But there was a problem.

The DOE’s field data on wind turbine performance was too grim—too realistic—for a report destined to pump up the future of wind power. Far more favorable statistics would be required. And the consultant employed to produce the stand-in datasets would not disappoint.

The authors retained Black and Veatch—a consultancy that designs both wind farms and natural-gas generation plants—to develop cost projections as well as key capacity factors for the analysis.31 Remember, a capacity factor is simply the percentage of a wind turbine’s nameplate capacity that is actually produced under real-world conditions—the difference of a percent or two can make or break a wind farm. According to DOE data, when countries or regions start to install wind turbines, the average capacity factor goes up at first, then levels off or declines as additional turbines are sited in less-ideal locations.32 For instance, between 1985 and 2001, the average capacity factor in California rose impressively from 13 percent to 24 percent, but has since retreated to around 22 percent. Over recent years, Europe’s maturing wind farms have stabilized below 21 percent.33 The U.S. average is under 26 percent, according to field readings from the DOE. That’s why Black and Veatch’s capacity-factor assumptions, starting at 35 percent to 52 percent in 2010 and continuing to increase by another 15 percent through 2030, are particularly shocking.

Black and Veatch’s average capacity-factor estimations rank among the highest ever published anywhere, let alone in a formal government report. If Black and Veatch knows how to run the nation’s turbines at such high capacity, then they know something that nobody else does. Even the pro-wind AWEA caps realistic capacity factors at a terribly optimistic 40 percent—so, incidentally, does the Department of Energy.34 In fact, Black and Veatch’s expectation that capacity factors for wind turbines will increase over the next twenty years conflicts with other DOE reports, which forecast turbulence as future wind farms are forced into subprime locations.

The knowledgeable public servants at the DOE might have laughed Black and Veatch out of Washington. But they didn’t. They got them published.

The justifications for employing such extraordinary assumptions are not entirely clear. During my investigation, a DOE official assured me that the Black and Veatch figures “were extensively critiqued and adjusted by experts in the wind and general energy communities.” Yet when I asked a director at Black and Veatch why their figures differed so dramatically from DOE assumptions, he was rather tight-lipped, insisting only that they stood by the methodology as outlined in the report.35 That’s particularly disconcerting.

The report’s methodology section states simply, “Black and Veatch used historical capacity factor data to create a logarithmic best-fit line, which is then applied to each wind power class to project future performance improvements.” It seems the consultancy assumed that the wind turbine learning curve (i.e., the idea that past experience with a technology helps to improve the technology and reduce its costs) would continue to produce gains well into the future. While it is well accepted that this occurred through the 1980s and 1990s, the learning curve has since flattened, as the DOE has documented. Therefore, extrapolating a select few years of data into the future without acknowledging the industry’s maturation is as problematic as extrapolating the growth of high school students to show that by college they will stand taller than giraffes.
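
To see how a logarithmic best-fit line can mislead once a learning curve flattens, consider a sketch with invented numbers; the data points below are not Black and Veatch’s, they merely mimic an industry that improves early and then matures:

    # Illustration of the extrapolation problem (data are invented).
    import numpy as np

    years = np.array([1, 2, 3, 4, 5])                      # early observations
    cf_percent = np.array([13.0, 17.0, 20.0, 22.0, 23.0])  # rising, then flattening

    # Fit CF = a*ln(year) + b, as the report's methodology describes.
    a, b = np.polyfit(np.log(years), cf_percent, 1)

    for year in (5, 20, 40):
        print(year, round(float(a * np.log(year) + b), 1))
    # The fitted curve keeps climbing into the mid-30s, while the
    # underlying series has already leveled off near 23 percent.

A curve fitted only to the improving years climbs forever; the mature years it ignores tell a different story.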

In addition to the optimistic capacity-factor projections, the report’s analysis includes mysterious historical data. Black and Veatch “estimated” capacity factors ranging from 32 percent to 47 percent in 2005.36 The report fails to mention that DOE fieldwork from that year placed the actual nationwide capacity factor closer to 20 percent.37 (When I asked Black and Veatch about the discrepancy, they offered no further comment.) These discrepancies aren’t the only surprises lurking in the report’s appendices.

Black and Veatch assumed that the costs for building, installing, and maintaining future wind turbines will not increase, as other DOE reports predict, but will actually decrease, due to what it black-boxes as “technology development.” But since today’s turbine designs are already close to their theoretical maximum efficiency, the future success of wind power may be less influenced by technological development than by social and environmental variables. Many of the windiest sites present high barriers to entry. Since turbines must be spaced at least five rotor diameters apart side-to-side and at least ten rotor diameters front-to-back in order to avoid a wind “shading” effect, vast stretches of land rights must be secured in order to create even a modestly scaled wind farm. Offshore sites are easier to procure and have strong, consistent winds, but they are expensive to develop, connect, and maintain for obvious reasons—inaccessibility, deep sea beds, high waves, corrosive salt water, hurricanes, and so on. The Department of Energy expects that suboptimal environments—with greater wind turbulence, wind variability, and unfavorable site factors such as steep slopes, terrain roughness, and reduced accessibility—will push up the cost of most of the remaining wind farm sites by some 200 percent.38
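
The spacing rule alone implies striking land requirements. A rough sketch, assuming two-megawatt turbines with one-hundred-meter rotors (my assumptions; the text supplies only the spacing rule):

    # Land implied by the 5-by-10 rotor-diameter spacing rule (illustrative).
    rotor_diameter_km = 0.1              # assumed 100-meter rotor
    turbine_mw = 2.0                     # assumed nameplate rating per turbine
    spacing_area_km2 = (5 * rotor_diameter_km) * (10 * rotor_diameter_km)

    nameplate_mw = 3100                  # the wind farm that matches one coal plant
    turbines = nameplate_mw / turbine_mw
    print(turbines * spacing_area_km2)   # about 775 square kilometers

Under these assumptions, replacing a single large coal plant means securing land rights across an area comparable to a major metropolis.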

When Black and Veatch’s capacity-factor assumptions are compounded by their cost assumptions, readers are left with an impression of wind power that is up to six times more impressive than if the analysis were run using the DOE’s own figures.39 This raises the question, Why did the Department of Energy base its pivotal wind energy report on numbers conjured up by an engineering firm with a vested interest in advancing energy production, rather than its own data? This is the question I posed to the DOE.

Their response was telling. They made it apparent that even though the report claims to contain “influential scientific information,” its analyses might not be recognized as such by the greater scientific community.40 One of the report’s lead editors told me, “The 20% Wind work was carried out to develop a picture of a future in which 20 percent of the nation’s electricity is provided from the wind, and to assess the feasibility of that picture. The work was based on the assumption that reasonable orderly advancement of the technology would continue, and that key issues needing resolution would be addressed and favorably resolved. Hence the work used input information and assumptions that were forward-looking rather than constrained by recent history.”41

Indeed, the authors did not allow recent history to stand in their way. In fact, some might argue that their answer echoes the rhetoric used to defend the fabrication of data for which no historical justification or cultural context exists. Energy players employed such lines of reasoning to suggest that by the 1960s, nuclear energy would produce abundant clean energy for all, that by the 1970s, fusion power would be too cheap to meter, and that solar cells would be fueling the world’s economies by 1986.42 With the advantage of hindsight, historians of science romp in the particulars of how such declarations rose to prominence. They show how genuine inquiry was often pushed aside to make room for the interests of industrial elites in their attempts to pry open taxpayer coffers for subsidies. Will future historians judge the 20% Wind Energy by 2030 report similarly?

Yes, reasons Nicolas Boccard, author of two academic papers recently published in Energy Policy.43 In his opinion, the kind of tomfoolery going on at the DOE is nothing particularly shocking. Boccard, who studies the phenomenon of capacity-factor exaggerations in Europe, found that when solid data do not exist, wind proponents are all too willing to make “unsubstantiated guesses.” They get away with it because the public, politicians, journalists, and even many energy experts don’t understand how capacity factors influence prospects for wind power development. Or, perhaps caught up in the excitement surrounding wind energy, proponents may simply not care, owing to confirmation bias, a psychological phenomenon whereby people tend to overvalue information that reinforces their ideology and undervalue that which contradicts it. Boccard insists, “We cannot fail to observe that academic outlets geared at renewable energy sources naturally attract the authors themselves supportive of renewable energy sources, as their writing style clearly indicates. As a consequence, this community has (unconsciously) turned a blind eye to the capacity factor issue.” He compared wind farm data across many European countries, where wind power penetration is many times higher than in the United States. He uncovered a worrisome gap between the anticipated and realized output of wind turbines. In fact, Boccard maintains, the difference was so large that wind power ended up being on average 67 percent more expensive and 40 percent less effective than researchers had predicted. As a rule of thumb, he maintains that any country-level assumptions of capacity factors exceeding 30 percent should be regarded as “mere leaps of faith.”44
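For readers keeping score at home, a capacity factor is simply the energy a turbine actually delivers divided by what it would deliver running nonstop at its nameplate rating. The minimal sketch below, using hypothetical figures rather than Boccard’s data, shows the definition and why his two numbers travel together: cost per unit of energy scales inversely with capacity factor, so a 40 percent output shortfall implies roughly a 67 percent cost overrun.

# Capacity factor = energy actually delivered / energy at continuous
# full nameplate output. All figures here are hypothetical.
nameplate_mw = 2.0
hours_per_year = 8_760
actual_mwh = 3_500                     # assumed metered output

cf = actual_mwh / (nameplate_mw * hours_per_year)
print(f"capacity factor: {cf:.0%}")    # ~20 percent

# Cost per unit of energy scales inversely with capacity factor, so a
# 40 percent output shortfall becomes a ~67 percent cost overrun:
predicted_cf = 0.30
realized_cf = predicted_cf * (1 - 0.40)
print(f"cost multiplier: {predicted_cf / realized_cf:.2f}")  # 1.67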

It might seem counterproductive for wind firms to risk overinflating expectations, but only if we assume that real-life turbine performance will impact their profit potential. It won’t. Consulting firms such as Black and Veatch stand to lock in profits during the study and design phase, long before the turbines are even brought online. The AWEA manufacturers stand to gain from the sale of wind turbines, regardless of the side effects they produce or the limitations they encounter during operation. And by placing bets on both sides of the line, with both wind turbines and natural gas, Pickens was positioned to gain regardless of which way the wind blew. If the turbines don’t deliver on the promise, it’s no big deal for those in the money. The real trick is convincing the government, and ultimately taxpayers, to pick up as much of the bill as possible. And one of the best tools for achieving that objective? A report that can be summarized in a sound bite, struts with an air of authority, and glides off the president’s tongue with ease. 20% Wind Energy by 2030.

It may be tempting to characterize this whole charade as some sort of cover-up. But the Department of Energy officials I interviewed were certainly open (if nervous) to my questions; anyone with an Internet connection can access the report and its suspect methodologies; and the DOE regularly publishes its field measurements in a report called the Annual Energy Outlook. There’s no secret. Energy corporations develop “forward-looking” datasets favorable to their cause, government employees slide those datasets into formal reports, the Department of Energy stamps its seal on the reports, and the Government Printing Office publishes them. Then legislators hold up the reports to argue for legislation, the legislation guides the money, and the money gets translated into actions—usually actions with productivist leanings. It isn’t a cover-up. It’s standard operating procedure. This may be good or bad, depending on your political persuasion. This well-oiled system has operated for years, with all actors performing their assigned duties. As a result, Americans enjoy access to ample and inexpensive energy services, and we have a high standard of living to show for it. But this process nevertheless leads to a certain type of policy development—one that is intrinsically predisposed to favor energy production over energy reduction. As we shall see, this sort of policy bent—while magnificently efficient at creating wealth for those involved—does not so clearly lead to long-term well-being for everyone else.

Step Away from the Pom-Poms

When Big Oil leverages questionable science to their benefit, environmentalists fight back en masse. As they should. But when it comes to the mesmerizing power of wind, they acquiesce. No op-eds. No investigative reports. No magazine covers.

Nothing.

If environmentalists suspected anything funny about the 20% Wind Energy by 2030 report, they didn’t say anything about it in public. Instead, fifty environmental groups and research institutes, including the Natural Resources Defense Council, Sierra Club, and Lawrence Berkeley National Laboratory, opted to double down on their windy bets by formally backing the study. When the nation’s smartest and most dedicated research scientists, physicists, and environmentalists roll over to look up googly-eyed at any corporate energy production report, it’s worthy of our attention. This love affair, however, is harmful to the environmentalists’ cause for a number of reasons.

First, fetishizing overly optimistic expectations for wind power takes attention away from another grave concern of environmental groups—reducing dirty coal use. Even if the United States could attain 20 percent wind energy by 2030, the achievement alone might not remove a single fossil-fuel plant from the grid. There is a common misconception that building additional alternative-energy capacity will displace fossil-fuel use; however, over past years, this hasn’t been the case. Producing more energy simply increases supply, lowers cost, and stimulates additional energy consumption. Incidentally, some analysts argue that the mass deployment of wind turbines in Europe has not decreased the region’s carbon footprint by even a single gram. They point to Spain, which prided itself on being a solar and wind power leader over the last two decades, only to see its greenhouse gas emissions rise 40 percent over the same period.

Second, the pomp and circumstance around wind diverts attention from competing solutions that possess promising social and ecological value. In a cash-strapped economy, we have to consider the trade-offs. As journalist Anselm Waldermann points out, “when it comes to climate change, investments in wind and solar energy are not very efficient. Preventing one ton of CO2 emissions requires a relatively large amount of money. Other measures, especially building renovations, cost much less—and have the same effect.”45

The third problem is the problem with all myths. When they don’t come true, people grow cynical. Inflated projections today endanger the very legitimacy of the environmental movement tomorrow.

Every energy-production technology carries its own yoke of drawbacks and limitations. However, the allure of a magical silver bullet can bring harms one step closer. Illusory diversions act to prop up and stabilize a system of extreme energy consumption and waste. Hype surrounding wind energy might even shield the fossil-fuel establishment—if clean and abundant energy is just over the horizon, then there is less motivation to clean up existing energy production or use energy more wisely. It doesn’t help when the government maintains two ledgers of incompatible expectations. One set, based on fieldwork and historical trends, is used internally by people in the know. The second set, crafted from industry speculation and “unconstrained” by history, is disseminated via press releases, websites, and even by the president himself to an unwitting public.

It may be time for mainstream environmental organizations to take note of this incongruence, put away the clean energy pom-poms, and get back to work speaking up for global ecosystems, which are hurt, not helped, by additional energy production. Because as we shall see, the United States doesn’t have an energy crisis. It has a consumption crisis. Flashy diversions created through the disingenuous grandstanding of alternative-energy mechanisms act to obscure this simple reality.

3. Biofuels and the Politics of Big Corn

Years ago, fairy tales all began with “Once upon a time . . .” Now we know they all begin with, “If I am elected.”

—Carolyn Warner


Iowa. That’s the answer to a question that growing numbers of scientists, aid workers, reporters, and environmentalists are asking about ethanol and other biofuels. But before we can address the question, it would be helpful to understand what biofuels are and how they are affecting our energy infrastructure.

Biofuels in Sixty Seconds or Less

Like photovoltaics and wind turbines, biofuels are another way to harness power from the sun, but through photosynthesis. Unlike wind turbines and solar photovoltaics, biofuels are easily stored and dispatched as needed, much like oil, coal, and natural gas, making their energy far more valuable.

Before the industrial revolution, biomass materials (i.e., living and recently dead plant material, such as firewood, and biological material, such as dung) were humanity’s primary sources of energy.1 The world’s first mass-produced flex-fuel vehicle, Ford’s Model T, ran on ethanol. And even up through World War II, the United States Army distilled ethanol to supply fuel for combat vehicles. Nevertheless, after the war an abundance of low-cost petroleum washed America’s biofuel industries down the drain.2 For a time.

We’ll dredge up the politics behind biofuel’s reemergence in a moment. But first, let’s consider the chief biofuels available today:

• Solid biomass such as wood, sawdust, agricultural waste, manure, and other products are burned directly, formed into pellets, or converted into charcoal.

• Biogases such as methane are produced from organic materials in anaerobic digesters or captured as they naturally emit from animal, agricultural, and landfill waste.

• Bioalcohol, most commonly ethanol, is distilled from starchy plants such as corn, sugar beets, and sugar cane.

• Biodiesel is chemically manufactured from oil-rich plant and animal feedstocks such as animal fats, rapeseed oil, palm oil, and algae.

Though the various biofuel techniques vary in style and complexity, the basic idea is the same: refiners convert plant and animal materials into usable energy products. In the United States today, biomass products serve about 5 percent of primary energy demand.3

Biofuel critics point out that the industry produces airborne heavy metals, copious amounts of wastewater, and a variety of other externalized environmental costs. As evidence, they point to Brazil, where ecologists declared many rivers and waterways biologically dead as early as the 1980s due to biofuel effluents (ethanol represents roughly a third of Brazil’s automotive fuel).4

Perhaps the most cited drawback, however, is the risk that biofuels can spark land competition between food and fuel, inducing an upward pressure on global food prices. As biofuels become more valuable, farmers may opt to grow fuel crops instead of food crops on their existing fields or even level forests in order to expand croplands. High food prices do not significantly affect rich consumers because they spend just a small portion of their income on food. Not so for the world’s poor. For years, researchers warned that expanding biofuels production would jeopardize food security worldwide. Eventually, they proved to be right.

Turning Food into Fuel

In 2008 riots ensued throughout the world in response to a dramatic increase in corn prices. The White House blamed the increase on rising food demand from fast-growing China and India.5 Others disagreed. World Bank president Robert Zoellick acknowledged that by early 2008 it was evident that biofuel demand had become a “significant contributor” to grain price escalations, which put thirty-three countries at risk for social upheaval.6 Washington was dismayed, maintaining that biofuel demand was responsible for less than 3 percent of the price increase—bad news for Zoellick, as the United States was the World Bank’s major donor. Zoellick immediately backpedaled by sequestering a confidential report that the World Bank had painstakingly prepared on the price shock. But the report did not remain secret for long. An informant leaked it to the Guardian that summer.7 The report’s authors concluded that biofuel demand was actually responsible for a hefty 75 percent of the food price jump.

To some, converting arable fields over to fuel crops was especially troubling given that much of the resulting biofuel would eventually burn away in inefficient vehicles driving through inefficient transport systems. The head of the International Food Policy Research Institute, Joachim von Braun, announced that world agriculture had “entered a new, unsustainable and politically risky period.”8 Around the same time, researchers at the Carnegie Institution and Lawrence Livermore National Laboratory published a paper claiming that even if nations were to divert the entire global harvest of corn to ethanol production, it would satisfy just 6 percent of global gasoline and diesel demand. They observed that “even in the best-case scenario, making ethanol from corn grain is not an effective route for lowering the carbon intensity of the energy system . . . ethanol from corn is basically a way to make cars run on coal and natural gas.”9 Why coal and natural gas? We shall come to that soon.

In 2009 the National Academy of Sciences released a study detailing how the combined health costs, pollution, and climate change impacts from producing and burning corn ethanol were worse than simply burning gasoline, perhaps almost twice as bad. A prominent professor from Iowa State University’s Agriculture and Biosystems Engineering Department published a biting attack on ethanol, claiming that “while feedstock can be grown annually, ethanol is not renewable. Ethanol production is entirely dependent on nonrenewable (petro-) energy in order to get any energy out. The term ‘renewable’ is grossly overused by those promoting ethanol and other biofuels, indeed the promotions sound like a call for a perpetual energy machine.”10

The criticisms didn’t stop.

Academics and government agencies released a flurry of scientific research investigating ethanol. By 2011, when food prices spiked again and Congress let an ethanol tax credit expire, it was difficult to find any informed individual who didn’t have some sort of opinion on the fuel. Critics continued to deride it for polluting water, consuming fossil fuels, spewing greenhouse gases, endangering biodiversity, spreading deforestation, and of course destabilizing food supplies. If concerned citizens were disagreeing on the reason, they weren’t disagreeing on the consensus: corn ethanol was a flop.

So the question arises—Why did Americans ever think it was a good idea to turn food into fuel in the first place? The answer is, of course, Iowa.

Big Corn

In 2008 the United States found itself in an election year, as it so frequently does, and as a matter of habit, turned to Iowa, the geographic heart of the nation, to sound off on that year’s primary candidates. Perhaps the greatest fear of presidential and congressional candidates alike is being labeled “antifarmer” in the heat of the Iowa spotlight. Politically, this fear is justified. Americans are mesmerized by the romantic ideal of farmers who wear bib overalls, drive red tractors, and cultivate their own destinies—even though the real control of farming today lies firmly in the manicured hands of businesspeople who wear suits and drive Porsches. Nevertheless, this pastoral imagery guarantees there is always plenty of election angst in the air to scare up a veto-proof majority of Congress that will pass agri-anything. In that election cycle, it was a farm bill that handed out subsidies to big agribusiness and wealthy individuals—a list that included people such as David Letterman and David Rockefeller for their “farming” activities.11 By the same token, political candidates were leery of voicing anything but praise for the then well-recognized dirty, wasteful, and risky practice of distilling corn ethanol.

Amidst the drive to realize economies of scale, most farms in wealthier nations no longer resemble the kind Old MacDonald had, with an assortment of animals, vegetables, grains, and fruits. Farms once worked as miniature ecosystems where animal wastes fertilized plants, plants produced food, and animals ate the leftovers. By some measures, this system was inefficient—it no doubt required significant human labor. In contrast, modern farms utilize highly mechanized systems to produce just one or two high-yield products. The contemporary farming system realizes far greater harvests, feeding more people with less land, but it is not without its own set of costs and risks.

After World War II, farmers invested in plant breeding programs and agricultural chemicals in order to increase yields. Over subsequent decades, they grew increasingly dependent on technological advances, intensifying their farming practices with outside capital along the way. Smaller farms consolidated into larger and larger farms, and by the beginning of this century, the bulk of farming income in the United States came from the top few percent of the nation’s farming firms. These superfarms wield a historically unprecedented degree of influence on agriculture, from purchasing, sales, and distribution to the investment and control of resources. While these select few corporations eagerly showcase the promise of their technologies to feed the world’s poor, critics maintain that the bulk of these research efforts accrue toward increasing short-term profits—not toward addressing hunger-related or longer-term concerns of low soil fertility, soil salinity, and soil alkalinity. Critics also condemn agribusiness for downplaying the risks that their models of agriculture produce—risks stemming from superpests, declining crop diversity, and national overreliance on a few large transnational firms for food production. Furthermore, they maintain that centralized agriculture is responsible for the proliferation of dead zones, has bred distribution-related externalities, and has led to the demise of traditional rural communities.12

In the early 1970s, one of these large corporations, Archer Daniels Midland, had a solution that was looking for a problem. It wanted to market the byproducts of its high-fructose corn syrup, a product that was growing in popularity and would eventually come to dominate the sweetener market. The firm’s politically savvy and well-connected president, Dwayne Andreas, knew that one byproduct, ethanol, could power automobiles. He launched an intensive lobbying and “educational” program to promote corn ethanol as a fuel source, an effort auspiciously timed with the 1973 oil embargo. Archer Daniels Midland’s lobbying team convinced key senator Bob Dole (R-Kansas), as well as President Jimmy Carter, that corn ethanol production could circumvent the need for oil imports. With additional prodding from the corn and farm lobbies, Congress eventually passed the Energy Tax Act in 1978, which offered tax breaks for gasoline products blended with 10 percent ethanol. In 1980 Congress wove additional ethanol incentives into legislation—then again in 1982— and again in 1984.13 For Archer Daniels Midland it was a windfall, though every dollar of ethanol profit was costing taxpayers thirty dollars, according to a critical report from the conservative Cato Institute.14

Ethanol proponents received another boost. Across the country, states increasingly chose the alcohol to replace the toxic gasoline oxygenate MTBE.15 The quick switch launched ethanol prices higher, fueling both facility upgrades and new plant construction. In 2004 the American Jobs Creation Act provided a $0.51-per-gallon ethanol subsidy to oil companies that blended ethanol with their gasoline and instituted a protective tariff of $0.54 per gallon to ward off Brazilian ethanol imports. Ultimately, Congress required gasoline blenders to incorporate at least 4 billion gallons per year of ethanol with gasoline in 2006, 6.1 billion in 2009, 15 billion by 2015, and 36 billion by 2022.16 Politicians assured agribusiness firms that they would enjoy predictable ethanol demand well into the future. But for Big Corn, being handed a guaranteed ethanol empire wasn’t enough. They wanted taxpayers to pay for it.

It might have seemed a daunting task to both harness public support and motivate Congress to cede billions of public funds for one of the largest subsidies of big business (and by proxy, car culture) ever attempted. But Big Corn handled it deftly. Ethanol producers dispatched teams of lobbyists to federal, state, and local government chambers to ensure that legislators would subsidize the industry at every stage of development. They packaged their handout requests under various guises of helping farmers, increasing energy independence, protecting the environment, and keeping energy jobs at home, even though there was little real evidence to show that industry subsidies would serve any of these concerns. The rebranding was a success, prompting numerous legislative actions:

• Loan guarantees: The U.S. Department of Agriculture guaranteed biofuel loans and spent eighty million dollars on a bioenergy development program.

• State tax breaks: Individual states instituted retailer incentives, tax incentives, discounts for ethanol vehicles, and fuel tax reductions for ethanol, totaling several hundred million dollars per year.

• Federal tax breaks: The Internal Revenue Service reclassified biofuel facilities and their waste products into more favorable asset classes, allowing for billions of dollars in savings for the industry.

• Research funds: The U.S. Department of Energy released hundreds of millions of dollars to fund research and development and demonstration plants.

• Labor subsidies: State tax laws as well as the federal Domestic Activities Deduction introduced income tax reductions for workers in the biofuel industry, totaling about forty to sixty million dollars per year.

• Farm subsidies: U.S. farm policies long provided direct subsidies for numerous crops; billions of dollars of these funds accrued to crops for biofuel production.

• Water subsidies: County and state subsidies for water greatly benefited ethanol producers since every gallon of ethanol required hundreds of gallons of water to grow crops and process the fuel.17

And the total bill? The subsidies between 2006 and 2012 equated to about $1.55 per gallon-gasoline-equivalent of ethanol. For comparison, that’s over one hundred times the subsidies allotted for a gallon of petroleum gasoline. Nevertheless, subsidies were only part of Big Corn’s benefit package. There was more.

Possibly the most egregious capitulation came in 2007 when the Environmental Protection Agency reclassified ethanol fuel plants, allowing them to significantly increase their federally controlled emissions. Government regulators also released producers from accounting for their fugitive emissions (pollutants not from the plant stack itself) and no longer required them to adopt the best available control technologies. Under these lax standards, biofuel refineries shifted away from using natural gas to power their energy-intensive distillation operations and instead began using cheaper, dirtier coal.18 Additionally, regulators lowered corporate average fuel economy (CAFE) requirements for car companies that produced flex-fuel vehicles, which run on both ethanol and gasoline, even though customers opted to fill their flex-fuel tanks with standard gasoline over 99 percent of the time. This gaping loophole reduced the efficiency of the U.S. vehicle fleet across the board, increasing oil imports by an estimated seventy-eight thousand barrels per day.19

Energizing Iowa

Dr. Dennis Keeney, a professor at Iowa State University, has a front-row seat to the nation’s electoral primary action. He maintains that Iowa’s early primary position places the state’s electorate in an especially influential role. “Any politician, be it dogcatcher or presidential candidate, speaking against ethanol in Corn Belt states has been doomed to denigrating letters, jeers from peers, and political obscurity,” remarks Keeney, who asserts, “had ethanol expansion been subject to environmental assessment guidelines and/or life cycle analyses, the ethanol support policies, in my opinion, would never have been adopted.”20 But if public perception of ethanol was high, politicians knew their poll numbers would be too, as long as they supported the fuel. To speak reason about corn ethanol would be to spit in the face of the Iowa economy, certain to prompt unwanted head shaking by voters come primary season.

In the run-up to the 2008 election, Archer Daniels Midland, along with grain handlers, processors, and other corporate farming interests, provided cover for their operation and the politicians who supported it by forming an “educational” group called the Renewable Fuels Association. This corporate-funded research and lobbying group effectively silenced numerous economic, scientific, environmental, and social critiques of corn ethanol as they arose.21

In fact, up until the food riots of 2008 there was little well-developed resistance to the idea of a corn ethanol economy. It seemed everyone had something to gain. To start, security hawks saw corn ethanol as a step toward energy independence. And at that time, most mainstream environmental groups were still vibrating with excitement that Big Oil might be overpowered by a countryside brimming with yellow kernels of clean fuel. Automakers were similarly pleased; converting cars to run on ethanol was much easier than building electric or fuel-cell vehicles (the best greenwashing alternatives) and cheaper too—at only about one hundred dollars extra per vehicle.22 Meanwhile, fossil-fuel firms likely found little reason to fret; every step of the ethanol scheme required their products—natural gas forms the basis for requisite fertilizers, oil fuels the tractors and shipping infrastructure for biofuels, and coal heats the high-temperature distillation and cracking processes.

It takes power to make power.

It’s well known that crop yields grew dramatically during the green revolution—indeed from 1910 to 1983 corn production per acre in the United States grew 346 percent—but the corresponding caveat is rarely mentioned. The raw energy employed to achieve those gains grew at more than twice the rate—810 percent during the same period. One of the reasons it takes so much energy to create biofuels in America is that roads and parking lots entomb the richest soil. Early settlers built cities along rivers and near deltas, precisely where eons of annual flooding had deposited layers of nutrient-laden silt. Suburban expansion subsequently pushed farms onto less fertile land, and farmers shipped the bounty back to the cities via a fossil-fuel-based transportation system. The system has changed little since.
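Restating those growth percentages as multipliers makes the caveat concrete: the energy consumed per unit of corn roughly doubled.

# A 346 percent increase means output ended at 4.46 times its starting
# level; an 810 percent increase means 9.10 times.
yield_growth = 1 + 3.46     # corn output per acre, 1910 to 1983
energy_growth = 1 + 8.10    # raw energy input over the same period

# Energy consumed per unit of corn roughly doubled.
print(f"energy intensity multiplier: {energy_growth / yield_growth:.2f}")
# prints 2.04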

Faced with less fertile soil, farmers started using synthetic fertilizers derived from petrochemicals to increase yields. Today, petrochemical fertilizer use is widespread. However, these nitrogen-rich fertilizers are not particularly efficient; only about 10–15 percent of the nitrogen makes it into the food we eat.23 The rest stays in the ground, seeps into water supplies, and makes its way into rivers and eventually oceans to create oxygen-depleted dead zones, like the one that extends from Louisiana to Texas in the Gulf of Mexico.24 This runoff flows through a very large loophole in the Clean Water Act. Where spring floods used to bring life, they now increasingly carry deadly concentrations of nitrogen, which artificially stimulate algal plumes, cutting off the oxygen to entire reefs of animal life. Hundreds of these dead zones afflict coastal regions around the globe.

It’s worth noting that following the BP oil spill in the Gulf of Mexico and during the run-up to the 2010 and 2012 election cycles, the ethanol industry pointed out that “no beaches have been closed due to ethanol spills.”25 Another somewhat true statement. Even though dead zones may not generate the same attention as a cataclysmic gulf oil spill, they are increasing in number and their detrimental impacts are intensifying.26

In the end, experts debate whether the energy obtained from corn ethanol is enough to justify the energy inputs to plant, plow, fertilize, chemically treat, harvest, and distill the corn into usable fuel. At best, it appears there is only a small return on fossil-fuel investment. Meanwhile, experts largely agree that Brazilian ethanol delivers a whopping eight times its energy inputs because it is based on sugarcane, not corn. Why don’t we do the same? The answer is simple—sugarcane doesn’t grow in Iowa.

Image
Illustration 4: Mississippi River dead zone. Agricultural runoff is propagating a dead zone at the mouth of the Mississippi River. The dead zone now covers over five thousand square miles. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio)

Measuring the Scale of the Resource

But what if sugarcane did grow in Iowa? Could we grow our way out of the impending energy crunch? Industry forecasters measure the potential harvests of biofuel feedstock such as corn, sugarcane, and rapeseed using widely accepted yield tables. Yield tables simplify complex relationships between plant starch, sugar, and oil content into convenient estimates of biofuel output per acre. News articles, scientific papers, and policy documents give a voice to these wildly popular simplifications. In fact, through their repeated use, these numbers have crystallized to achieve an air of credibility that was never intended and under other circumstances may never have been achieved.27 Well dressed in formal gridlines, they evolved as a mismatched patchwork of numbers plucked from various contexts at various times in various places. Some entries average regional growing data, some come from experimental farms (often with unusually high yields), and others reflect yields from random individual farms. These entries often lack controls for climate, location, length of growing season, soil type, availability of fertilizer, agricultural management, technological influences, and other factors that dramatically influence crop yields. In practice, researchers might unwittingly draw upon yield data from a specific farm in France in a given year and extrapolate the numbers to speak for expectations on the other side of the globe in a village with not just a different physical climate but also a different economic, political, and social climate. Researchers might call upon that French farm to estimate global harvests over a period spanning several decades.

In some cases, yield tables provide practical crop estimations. In other cases, biofuel proponents can employ them as reputable cover for coarse overestimations. A team of researchers from the University of Wisconsin, the University of Minnesota, and Arizona State University claim that extending such narrow figures to estimate global biofuel production is “problematic at best.” Since the statistics usually come from farms within the prime growing regions for each crop, energy productivists routinely overestimate yields by 100 percent or more.28 Related investigations back up this team’s research.

In all, increasing crop yields globally will require new genetically altered plants, greater agricultural productivity, significant land use alterations, and more water—challenges that will be especially pronounced for poorer regions.29 According to numerous studies, including a prominent report from the National Academy of Sciences, land-use alterations can lead to local and global warming risks, which may in turn decrease crop yields.30 The Stanford Global Climate and Energy Project estimates that climate change will have varying effects on agriculture by region, but on average will decrease traditional crop yields as global temperatures rise.31

Another wildcard is water. Even though scientists have developed genetically engineered crops that are drought resistant, they have not been able to modify the fundamentals of transpiration. Therefore, they have been unable to design crops capable of producing significantly higher yields with less water— a limitation that will tighten further as water too grows dearer.

Carbon Dioxide and Climate Forcing

Biofuel proponents have long hyped their fuels for being CO2 neutral. In their idealized case, biofuel crops absorb carbon dioxide from the surrounding atmosphere as they mature and release an equivalent amount when burned. Most researchers now agree that the biofuel carbon cycle is not so straightforward. Critiques from the scientific community coalesce around four central points.

First, converting land to biofuel crops can precipitously increase net CO2 emissions where farmers employ destructive cropping methods or deforestation. In Indonesia, soil decomposition accelerated as developers drained wetlands to plant palm oil crops. As a result, every ton of palm oil production grosses an estimated thirty-three tons of CO2.32 Since burning a ton of palm oil in place of conventional fuel only has the potential to save three tons of CO2, the process nets an excess thirty tons of the greenhouse gas. This realization led the Dutch government to publicly apologize for promoting the fuel; Germany, France, and eventually the European Union followed.

Biofuel crops may overflow into rainforests, as the authors of a recent article in Science point out:

Regardless of how effective sugarcane is for producing ethanol, its benefits quickly diminish if carbon-rich tropical forests are being razed to make the sugarcane fields, thereby causing vast greenhouse-gas emission increases. Such comparisons become even more lopsided if the full environmental benefits of tropical forests—for example, for biodiversity conservation, hydrological functioning, and soil protection—are included.33


Altogether, rainforests are magnificent resources for humanity; they stimulate rainfall, provide vital services for local inhabitants, and act as large sponges for CO2. Biofuel proponents are quick to point out that sugarcane crops are not planted in Brazilian rainforests, but on pastureland to the south. They’re correct for the most part. But since the demand for meat and other food products has not dropped (demand for Brazilian cattle is actually increasing), land scarcity rules. Higher land values push a variety of new developments into existing rainforests. Many admire Brazil for having figured out a way to make their cars run on domestic ethanol, but by proxy, they may have simply found a way to make cars run on rainforests.

The second biofuel concern involves the reflectivity of the earth’s surface. In the earth’s higher latitudes, dark evergreen regions absorb more heat from the sun than lighter vegetation such as grass and food crops.34 In the tropics, this phenomenon reverses as evapotranspiration above forests generates reflective cloud cover. Rainforest development endangers this shield. While biofuel feedstocks may provide a short-term financial boon for many of the world’s poor farmers, the resulting land competition will ultimately degrade their most valuable community asset.

It makes more sense to grow biofuel feedstock on abandoned cropland. As perennial biofuel grasses absorb airborne CO2 and sink carbon into their roots, soil carbon content can modestly increase.35 However, these tracts of empty land lie mostly in cooler climates and suffer from poor soil quality. Yields won’t be impressive. Even if biofuel producers exploited every abandoned field worldwide, the resulting fuel would only represent about 5 percent of current global energy consumption.36

Third, researchers criticize the biofuel industry’s reliance on fossil fuels, from the fertilizers derived from natural gas and petroleum to the fuels employed to plant, treat, plow, harvest, ferment, distill, and transport the fuels. It’s difficult to calculate an entire life-cycle analysis for rapeseed biodiesel and corn- and sugar-ethanol products since there are so many assumptions built into these models. Researchers who have undertaken the challenge come to different conclusions. Some argue these biofuels use more fuel and create more carbon dioxide than if we had simply burned fossil fuels directly. Others argue there is a benefit, even if it is not overwhelming.

Fourth, biofuel crop residues release methane, a greenhouse gas with twenty-three times the warming potential of CO2.37 Additionally, creating sugar, corn, and rapeseed biofuels yields considerable quantities of nitrous oxide, a byproduct of the nitrogen-rich fertilizers farmers use to grow the plants. Nitrous oxide’s global warming potential is 296 times that of carbon dioxide, and it additionally destroys stratospheric ozone.38
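The accounting behind such comparisons is simple carbon dioxide-equivalent weighting: multiply each gas by its global warming potential and sum. The emission quantities in this sketch are hypothetical, included only to show how the weighting works.

# Global warming potentials cited above: CH4 at 23x and N2O at 296x CO2.
GWP = {"co2": 1, "ch4": 23, "n2o": 296}

# Hypothetical annual emissions, in tons, for a notional refinery.
emissions_tons = {"co2": 100.0, "ch4": 2.0, "n2o": 0.5}

co2e = sum(tons * GWP[gas] for gas, tons in emissions_tons.items())
print(f"{co2e:.0f} tons CO2-equivalent")  # 100 + 46 + 148 = 294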

Ethanol might seem more attractive if it didn’t prompt food competition, net greenhouse gases, or require so much fossil fuel, water, and arable land. That’s precisely the promise of cellulosic ethanol.

Woodstock: The Promise of Cellulosic Ethanol

Instead of using foodstuffs such as corn or sugar, cellulosic ethanol producers harvest trees and grass that can grow within a variety of climates and require less fertilizer and water. Cellulosic ethanol is expensive, but proponents say it could ease the food-fuel competition. Cellulosic feedstock is rich with carbohydrates, the precursors of ethanol, but these sturdy plants lock away these calories inside fibrous stems and trunks. Extracting them is a tricky process.

Ethanol forms when sugars ferment, which is why Brazilian producers can process sugarcane into fuel almost effortlessly. Corn requires one additional step. Producers must mix the corn meal with enzymes to create the sugars. Wrestling ethanol from the cellulose and hemicellulose in grasses and trees is even more involved. Refiners must liberate sugars via a cocktail of expensive enzymes. Numerous firms are working to reduce the cost of these enzymes and some have even taken to bioprospecting to locate new ones, such as the digestive enzymes recently discovered in the stomachs of wood-munching termites.39 Yet bioprospecting brings its own set of ethical dilemmas.

Why bother? Because if perfected, cellulosic ethanol could potentially yield up to 16 times the energy needed to create the fuel—that compares favorably with corn ethanol, which arguably yields just 1.3 times its energy inputs, sugarcane ethanol, which yields about 8 times its energy inputs, and even regular old gasoline, which yields about 10 times its energy inputs. Furthermore, proponents claim that second-generation biofuels could someday cost as little as three dollars per gallon-gasoline-equivalent. But today, despite weekly breakthroughs in the field, cellulosic techniques remain prohibitively expensive and unproven on a commercial scale. Even the productivist-leaning Wall Street Journal calls for some degree of sobriety in formulating expectations for this moonshine of the energy world, stating that cellulosic ethanol “will require a big technological breakthrough to have any impact on the fuel supply. That leaves corn- and sugar-based ethanol, which have been around long enough to understand their significant limitations. What we have here is a classic political stampede rooted more in hope and self-interest than science or logic.”40
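Those ratios are easier to compare when restated as net energy, the fuel left over after paying the energy cost of production. The sketch below simply reworks the figures quoted above.

# Energy returned on energy invested (EROI), as cited in the text.
eroi = {
    "corn ethanol": 1.3,
    "sugarcane ethanol": 8.0,
    "gasoline": 10.0,
    "cellulosic ethanol (if perfected)": 16.0,
}

# Net energy per unit invested is EROI minus one.
for fuel, ratio in eroi.items():
    print(f"{fuel:35s} {ratio - 1:5.1f} net units per unit invested")
# Corn ethanol nets just 0.3 units, which is why critics question
# calling it "renewable" at all.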

The Forgotten Biofuels

Even as legislators flood cellulosic ethanol and other biofuel initiatives with funding, some biofuel opportunities go overlooked, mostly because they are boring in comparison. For instance, wastewater treatment facilities release methane, the main component of natural gas, but more than 90 percent of America’s six thousand wastewater treatment plants don’t capture it. As mentioned earlier, methane is a major greenhouse gas liability since its venom is more potent than that of carbon dioxide. The sludge output of the average American yields enough power to light a standard compact fluorescent light bulb without end. So skimming the methane from an entire city’s wastewater would not only prevent harmful emissions but also would produce enough power to run the entire wastewater operation, perhaps with energy to spare.41 Although not a large-scale solution, captured biogas is a reminder of the modest opportunities to draw upon biofuels without advanced technology.

Another biofuel product that is now starting to gain more attention is a convenient replacement for firewood. Burning firewood directly is a relatively dirty practice, emitting dangerous particulates, hydrocarbons, and dioxins.42 In poor countries, the soot from firewood, waste, and dung kills about 1.6 million people per year. It’s also a local climate changer; soot darkens air and darker air absorbs more solar radiation. But there’s another way to extract energy from wood besides burning it—one that was widely employed before the Industrial Revolution but has since fallen by the wayside—charcoal (recently rebranded as biochar). When processors heat wood above 300°C with limited oxygen, in a process called pyrolysis, it spontaneously breaks into three useful fuels: biochar, heavy oil, and flammable gas. In addition to its use as a fuel, farmers can till their soil with biochar in order to reduce methane and nitrous oxide greenhouse-gas emissions.43 Archaeologists uncovered ancient South American settlements in which buried charcoal has been sequestered for thousands of years, lending interest to the concept of using biochar as long-term storage for excess carbon.

In all, there may be many benefits to implementing biochar techniques in place of burning wood and waste for fuel directly. But this doesn’t make biochar a global solution. Cornell researcher Kelli Roberts points out that large-scale biochar production, as envisioned by some eager biofuel productivists, could yield unintended consequences.44 As with other biofuel methods, if producers clear virgin land to grow biochar inputs such as trees and switchgrass, the process could ultimately do more harm than good. Alternately, if producers grow biochar crops on existing farmland, farmers may be forced onto new land, yielding the same negative effects on virgin land plus the added risk of local food price instability. And then there is the hitch with any method for increasing available energy supply—it inevitably leads to growth, expansion, and increasing energy consumption—a reminder that smart upgrades in energy practices for local communities may not have the same positive effects if implemented on a larger scale.

Dreary Expectations

Researchers from the Carnegie Institution and Lawrence Livermore National Laboratory neatly sum up the limitations of biofuel technologies: “The global potential for biomass energy production is large in absolute terms, but it is not enough to replace more than a few percent of current fossil-fuel usage. Increasing biomass energy production beyond this level would probably reduce food security and exacerbate forcing of climate change.”45 The U.S. Department of Energy’s biofuel forecast is similarly tentative. It forecasts that biofuel use will only modestly expand, from 5 percent of primary energy supply today to about 9 percent in 2030. The agency lowered its previous expectations for the fuel, citing technical concerns about cellulosic ethanol development.46 Even the International Energy Agency’s “450 Scenario,” which employs highly optimistic assumptions that the agency itself admits will be “very challenging” to realize, forecasts that biofuels will fulfill at most just 16 percent of primary energy demand by 2030.47

Not long ago, these unadventurous expectations for biofuels would have been heretical. America was in a fervor over rising oil prices, and pundits gleefully framed ethanol as the answer. In 2006 the National Corn Growers Association complained they were sitting on a surplus of corn, the Worldwatch Institute proclaimed that biofuels could provide up to 75 percent of transportation fuel in the United States, and Congress was trucking bales of public funds to Big Corn.48 The subsequent collapse of ethanol’s popularity may very well have been a dress rehearsal for the wind and solar industries if the public comes to better understand the limitations of these schemes as well.

We now have every reason to suspect that large-scale biofuel production will require vast water resources, endanger areas reserved for conservation, intensify deforestation, and decrease food security. The net greenhouse-gas impact could be positive or negative depending on the type of feedstock plant materials, the biofuel production process, and the difference in reflected solar radiation between biofuel crops and preexisting vegetation. Ultimately, we might presume that biofuels will provide modest energy resources worldwide, but the most promising biofuel strategies are unproven on a commercial scale, may not be economical for some time, and will certainly entail side effects and limitations not yet well understood. Worthless? No. But certainly uninspiring.

Perhaps that’s why many people in the money have shifted their bets to another energy production technique, one that they are slowly resurrecting from its grave.

4. The Nuclear-Military-Industrial Risk Complex

Boy, we’re sure going to have some wrecks now!

—Walt Disney, upon constructing a model train to encircle his house


On March 16, 1979, Hollywood released a run-of-the-mill film that might have been rather unremarkable had the fictional plot not played out in real life while the movie was still in theaters. The China Syndrome, starring Jane Fonda, Jack Lemmon, and Michael Douglas, features a reporter who witnesses a nuclear power plant incident that power company executives subsequently attempt to cover up. Many days pass before the full extent of the meltdown surfaces. Just twelve days after The China Syndrome premiered, operators at the Unit 2 nuclear reactor at Three Mile Island, outside Harrisburg, Pennsylvania, received abnormally high temperature readings from the containment building’s sensors. They ignored them. Many hours passed before the operators realized that the facility they were standing in had entered into partial core meltdown. Power company executives attempted to trivialize the incident and many days passed before the full extent of the meltdown surfaced.

The China Syndrome went viral. When star Michael Douglas appeared on NBC’s The Tonight Show, host Johnny Carson quipped, “Boy, you sure have one hell of a publicity agent!” The staged nuclear leak filmed in the back lots of Hollywood and the real nuclear leak on Three Mile Island became conjoined, feeding into one another, each event becoming more vividly salient in the eyes of the public than if they had occurred independently. The intense media and political fallout from the leak at Three Mile Island, perhaps more than the leak itself, marked the abrupt end of the short history of nuclear power development in the United States.

Nuclear industry officials regularly accuse their critics of unfairly brandishing the showmanship of disaster as if it were characteristic of the entire industry while downplaying the solid safety record of most nuclear facilities. Indeed, meltdowns like the ones at Three Mile Island, Chernobyl, and Fukushima Daiichi don’t occur as frequently as oil spills. But then, the risks people associate with nuclear leaks are inordinately more frightening.

As with oil spills, journalists, politicians, and industry officials frame meltdowns as accidents almost without exception, though we could alternatively choose to frame nuclear power activities as highly unstable undertakings that are bound to expel radioactive secretions into the surrounding communities and landscapes over time.

One of the largest single releases of atmospheric radiation into American communities, about seven hundred times that of Three Mile Island, cannot even plausibly be framed as an accident.1 The U.S. government deliberately planned and released this emission under the code name “Green Run” in a once-secret government compound so infrequently acknowledged that even President Obama claimed to have been unaware of the site during his time in the Senate.2 The facility would later rise to become the single largest recipient of his federal stimulus funds.

Green Run

In the early 1940s, a U.S. government convoy rolled into a small community in Washington State, inexplicably condemned private homes, shut down the high school, and hastily laid out foundations for over five hundred buildings on an area roughly half the size of Rhode Island.3 For a time, nearby residents had no idea what happened behind the gates of the enormous secret facility, which was named the Hanford Site. But on August 6, 1945, when U.S. forces dropped an atomic bomb on Hiroshima, Japan, its purpose became abundantly clear. The United States built Hanford to enrich plutonium, in a hurry.

After the war, Hanford’s purpose shifted (the first of many shifts). In an effort to judge how much plutonium the Soviet Union was processing during the cold war’s infancy, the Pentagon decided to take measurements from the dispersion of a known quantity of radioactive iodine-131, a byproduct of plutonium production, to be released at Hanford. During the night of December 2, 1949, the U.S. Air Force deliberately executed a sudden and clandestine discharge of radioactive iodine intended to disperse and contaminate the fields, communities, and waterways surrounding Hanford.4 The U.S. government kept the radioactive dispersion secret for nearly forty years until the Freedom of Information Act forced the Department of Energy to release the classified documents in 1986.5 Selected intelligence purposes remained secret until 1993.

After scientists expelled the radioactive cloud from Hanford, radiation levels in surrounding communities jumped to 430 times the then-permissible limits. Hanford’s scientific team measured the highest levels of radioactivity in nearby plant life, 28 microcuries/kg, or 2,800 times the 1949 permissible limit.6 For comparison, the Washington State Health Department now identifies any food product over 0.013 microcuries/kg as radioactive and unfit for consumption. Even given the high levels of free radiation in neighborhoods after the experiment, the scientists advocated for an even larger radioactive cloud in their official report, secret document ORNL–341. The passionless abstract of the report reads, “Very little information of a conclusive nature was gained concerning the diffusion. . . . Using a stronger source, is recommended.”7
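Putting those limits side by side takes only a line or two of arithmetic; the sketch below uses nothing beyond the figures just cited.

# Highest Green Run plant-life reading versus the cited limits.
measured = 28.0                        # microcuries/kg
implied_1949_limit = measured / 2_800  # back-calculated from the text
today_limit = 0.013                    # Washington State threshold

print(f"implied 1949 limit: {implied_1949_limit:.3f} microcuries/kg")
print(f"times today's threshold: {measured / today_limit:,.0f}x")
# The same sample would exceed today's unfit-for-consumption
# threshold more than two thousand times over.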

Image
Figure 6: Secret U.S. government document ORNL–341. This 1949 chart from the declassified document ORNL–341 details fly-by radiation measurements following a planned government-sponsored dispersion of radioactive materials into the local landscape, waterways, and human settlements of southeast Washington State. This radiation release was roughly seven hundred times that of the 1979 Three Mile Island meltdown and was kept secret for nearly forty years. (Image courtesy of Oak Ridge National Laboratory)

Shown here is the record of the return run which was made at the level of the smoke and haze top and which was believed to be at the inversion top. The maximum return over the stack is appropriately lower than was received on runs at lower levels. The symmetry and strength of this return is as would normally be expected if the very top of the gaseous cloud had been traversed. The return remains above background for approximately 15 miles downwind.


Today, local residents still have unanswered questions. Some details remain classified. Nevertheless, the nineteen thousand pages of declassified government documents detail a long history of radioactive emissions from Hanford—emissions that contaminated the air, soil, groundwater, and the Columbia River. Sadly, the experiences of families from Hanford and others exposed to radiation from the Green Run experiments are far from unique. In 1995, the Department of Energy released documents showing that the U.S. government has sponsored at least several hundred secret releases of radiation throughout the United States.8

In addition to intentional radioactive dispersions and unintentional radioactive leaks, nuclear processing and power activities also churn out large quantities of varied radioactive waste, which carry an assortment of protracted risks and costs of their own. One notable example comes from within the Hanford Site itself—a ponderous reservoir labeled “Tank SY-101.”

Tank SY-101

If the construction, filling, and management of Tank SY-101 had been initiated in our day, it would surely become a leading news story throughout the world. But since government employees built it several decades ago, tucked away inside Hanford, its story is infrequently told. Over Hanford’s forty years of operation, workers refined sixty-four metric tons of plutonium, enough to fill two-thirds of the nation’s arsenal of roughly sixty thousand nuclear warheads.9 Between the plutonium enrichment, energy generation, and other activities at Hanford, the facility produced a magnificent sum of radioactive waste.

Efforts to clean up the site began twenty years ago, but the cleanup is not yet halfway complete, and project funding is endangered. Originally expected to cost $50 billion, the cleanup has seen overruns force estimates much higher. For instance, a proposed waste treatment plant’s budget skyrocketed from $4.3 billion in 2000 to over $12 billion by 2008. The Department of Energy expects to complete the plant in 2019, but it has already postponed construction three times.10

According to the Department of Energy, Hanford contains:

• 2,100 metric tons of spent nuclear fuel

• 11 metric tons of plutonium in various forms

• about 750,000 cubic meters of buried or stored solid waste in 175 waste trenches

• about one trillion liters of groundwater contaminated above EPA drinking water standards, spread out over 80 square miles (contaminants include metals, chemicals, and radionuclides)

• 1,936 stainless-steel capsules of radioactive cesium and strontium, containing roughly 125 million curies of material in water-filled pools

• more than 1,700 identified waste dumps and 500 contaminated facilities

• more than 53 million gallons of liquid radioactive waste in 170 aging, underground single-shell tanks11

The faded hodgepodge of a sign marking the entry to the Hanford Site quietly deteriorates, along with a nearby assortment of massive storage tanks, which engineers originally designed as early as the 1940s to last no longer than a few decades.12 According to the Department of Energy, sixty-seven of the tanks have sprung leaks—releasing a combined one million gallons of radioactive waste into the local soil and groundwater, which is seeping into the adjacent Columbia River.13 Of the numerous tanks at Hanford, none has created more problems than Tank SY-101. Like many of the multistory tanks at Hanford, Tank SY-101 was filled with a largely unknown brew of mostly liquid radioactive waste. Since the tank’s contents did, and still do, constantly enter into various sorts of unpredictable internal reactions, the concoction’s chemical and reactive nature is no longer properly understood. For many years the slurry was not calm—a constantly evolving, hissing, sputtering brew topped with a crust sometimes prone to violent undulations and eruptions of radioactive and potentially explosive gases and fumes.14 In 1991 operators caught the tank’s contents on film, documenting what looked like an active lava flow. Lurching from side to side, splashing against the walls, and spitting gases exhausted by internal nuclear reactions, the entire mixture seemed alive. It occasionally agitated the tank, sometimes with enough force to bend metal components of its enclosure.15

Image
Illustration 5: Entering Hanford This fading sign ingloriously marks the entrance to the Hanford Site, a historically multiuse nuclear complex, now a makeshift nuclear waste reprocessing site of unprecedented proportions. (Photograph courtesy of Tobin Fricke)

Image
Illustration 6: A four-story-high radioactive soufflé This 1989 photograph offers a rare glimpse into the interior of nuclear waste containment Tank SY-101 at the Hanford Site. A lavalike crust encapsulates the top of a highly reactive, little understood nuclear waste slurry four stories deep. The mixture sputters angrily from around the edges of the tank in the upper part of the photograph. (Photograph courtesy of the U.S. Department of Energy)

Dedicated Hanford technicians repeatedly attempted to calm the angry concoction for years, but the chemical chimera would smother sampling tubes, morph unpredictably, and evade numerous efforts to bring it into submission. With the threat of an uncontrolled dirty nuclear explosion in Washington State, Hanford workers developed special operating procedures to deal with the tank. In 1993, the Department of Energy ordered a large pump to circulate the toxic slurry so that its gases would effervesce more consistently, like bubbles gently rising from a glass of radioactive carbonated soda. It worked—for a time—but then the crusty top, being less agitated, began to thicken, harden, and trap the gaseous effluents beneath. Subsequently the crust rose up in what investigative reporter Matthew Wald characterized as “a giant radioactive soufflé,” growing to a height of thirty-six feet and approaching the more fragile single-walled top of the tank.16 In a risky move, operators lanced the top crust of the giant gas-filled cyst with air jets, which effectively halted the mass’s expansion. Some time afterward, the team was able to pump half of the container’s waste into another holding tank and cut the remainder with an equal amount of water, effectively doubling the volume of radioactive waste but diluting it enough to calm the potion’s rambunctious nature.17

Even more rarely told than the story of Tank SY-101 is the story of how it changed Hanford. Over the years, the tank’s unprecedented nuclear risks became a central concern of the Department of Energy, one that presented Hanford operators with uncertain and extreme challenges. The Department of Energy coordinated multiyear research projects around the tank, and academics published papers on its behavior. Over time, modes of work within Hanford changed to accommodate the special treatment and operations required to look after the gooey beast. These forces pulled apart existing networks of workers, operating procedures, goals, and facilities at Hanford and reassembled them in a novel way. For some, the activities and politics surrounding Tank SY-101 signaled the end of Hanford as a closed, secretive, armaments production facility and marked its reincarnation as an obligatory checkpoint for environmental research and understanding of the containment and handling of nuclear waste.18 With the reorganization of Hanford, however, came a reorganization of risk perceptions and assessments. Embedded researcher Shannon Cram argues that Hanford’s practices for reducing uncertainty actually constitute a fictional reality “in which workers are to blame for nuclear accidents.”19

Hanford is understood in different ways by different people. A government conspiracy project. An innovative waste-reprocessing facility. A dump. A place to work and make a living. A tombstone for the nuclear dream. The various meanings, ambiguities, and uncertainties tangled in Hanford’s barbed-wire enclosures are a messy rendering of global nuclear anxieties writ small. However menacing these portents may be, they won’t be enough to frighten away the gravediggers standing over nuclear power’s tomb, shovels in hand.

The Resurrection

Soon after World War II, the United States initiated a self-described “peaceful” atomic energy program, during the presidency of Dwight D. Eisenhower, in an effort to assure a wary world that it was interested in more than just military deployments of nuclear science. Congress quickly ramped up nuclear energy funding. It also applied legislative lubricants, such as the 1957 Price-Anderson Act, which limited the nuclear insurance industry’s liability to just $540 million per nuclear accident. These helped nuclear energy slide into America’s power grid and into the psyche of its citizens. At the time, very few critics stood up to nuclear power. When they did pop up, holding out technical and economic risks in their hands, nuclear proponents handily smacked them back down. Former Environmental Protection Agency special assistant Professor Walter Rosenbaum recalls that “when problems could not be ignored, they usually were hidden from public view; when critics arose, they were discredited by Washington’s aggressive defense of the industry.”20 By 1975 the United States had 56 commercial nuclear reactors online, 69 under construction, and 111 more planned. The nation hummed with a kind of nuclear intoxication that burrowed deep into remote nooks of popular culture, from industrial design to literature. Even a colorfully packaged children’s toy, called the Atomic Energy Lab, came complete with radioactive materials for young nuclear scientists to test and measure.

And the rest is history.

After a boom-boom here and a boom-boom there, fears grew that there might everywhere be a boom-boom. So the industry came to an abrupt halt, right there in the middle of the nuclear highway. And there it sat with its parking brakes on.

That is, until the summer of the year 2004.

It was that summer when the Department of Energy murmured support for the nuclear industry’s pleas to extend the lives of numerous aging nuclear power facilities as well as its plans to add fifty new power plants to the nation’s electrical grid.21 The next year, Congress coughed up enough money to push the plan into action. In fact, the Washington Post reported that the nuclear industry was the 2005 Energy Policy Act’s surprise “biggest winner.”22 The act released more incentives to the nuclear industry than to wind, biomass, solar, geothermal, hydroelectric, conservation, and efficiency initiatives combined.23 In 2008 AmerGen Energy Company submitted an application to the U.S. Nuclear Regulatory Commission requesting a license to allow the Three Mile Island Unit 1 nuclear power facility to operate until April 2034. On October 22, 2009, during the height of a national swine-flu panic, the application was quietly approved.

The Legend of the Peacetime Atom

Whether this renewed interest in nuclear power is good or bad depends upon whom you ask. For some, nuclear power marks an opportunity for low-carbon and independent energy generation, while for others it represents a prescription for nuclear proliferation and fallout risks. Environmentalists in Germany, for instance, overwhelmingly rail against nuclear power, but environmentalists in Britain tend to support it. In Japan, nuclear power risks remained conceptually separated from the fallout horrors of World War II until the 2011 meltdowns at Fukushima collided the two.

In 2008, the Nuclear Suppliers Group, a cartel of forty-five nations that limits the trade of nuclear materials and technology, agreed to bend its rules in order to allow India access to uranium imports.24 When the waiver was first introduced, political sparring ensued between those who claimed such a move would lead to nuclear armament proliferation in the region and others who claimed the additional uranium marked a peaceful development of electricity that would benefit millions of Indians. So who’s right? Is nuclear power a way to produce electricity or a path toward building deadly weapons?

In reality, it is both.

The often-cited division between civilian nuclear power and military nuclear weaponry is problematic for several reasons. First, countries often end up desiring a bit of both—a little civilian electricity and a little nuclear weaponry. Political desires rarely congeal into exclusively one form or the other. Second, peacetime and wartime nuclear technologies are intermingled. For example, the power plant fuel rods, once spent, contain high concentrations of plutonium, which is useful for building bombs. Third, nation-states are in constant flux—politically, economically, and culturally—so the motivations of a country today cannot be assumed to hold in the future. In practice, an exclusively peacetime uranium atom is as inconceivable as a coin with just one side. We’ll review each point in order.

First, in his book The Light-Green Society, historian Michael Bess illustrates how a nuclear-armed France was not born from a single directive, like America’s Manhattan Project, but instead rose up from a series of smaller, more subtle nudges:

Perhaps the most striking fact about France’s emergence as a nuclear power is that no single meeting or series of meetings, no single group of individuals working together, no single confluence of key events, can be identified as the point at which the nation’s leadership decided to endow France with nuclear weapons. Rather, what we find is a sequence of incremental “mini-decisions,” some technical in nature, some budgetary, some administrative—a long series of tacit compromises and gradual technological advances accumulating, sliding into one another, over a decade and a half. French politicians and scientists in the Fourth Republic appeared extremely reluctant to come out and openly say, “Let’s build a Bomb!” Instead, they always opted for keeping the door open, for continuing lines of research and technical development that would leave available the option of building atomic weapons in the future—without committing anyone to an explicit military policy in the present. And so, one by one, the pieces of the puzzle quietly came together: a steady expansion of funding, allowing the newly created CEA, or Commissariat à l’Energie Atomique, to double its facilities every two years; a decision among scientists in 1951 to build a new generation of high-powered reactors, capable of producing weapons-grade plutonium; a top-secret cabinet meeting in 1954 under the prime ministership of Pierre Mendès-France, giving the green light for exploratory studies of a nuclear bomb; an international environment of increasing East-West tensions, coupled with the chilling prospect of West German rearmament within NATO; and growing pressures from the French military, which became ever more keenly interested in the physicists’ string of successes. All these developments gradually accumulated, with a weirdly impersonal but also seemingly irresistible momentum.25


The choice of France to build up a nuclear arsenal might better have been dubbed the “gigantic decision that no one made,” for by the time the de Gaulle government had announced it would take up nuclear arms in 1958, the thought had already tunneled its way into the heart of French institutions over the course of a decade, implicitly aligning a ready-made framework of chutes and ladders for weaponry development. In fact, less than two years passed between de Gaulle’s announcement and the detonation of the first French plutonium bomb at a Sahara test site.26

It didn’t have to turn out this way. The allure of nuclear weaponry attracted governments to uranium, rather than the arguably much safer element thorium, in building their reactors. Thorium is now gaining attention as an alternative to uranium. It’s less suited for advanced nuclear weaponry and therefore less likely to induce proliferation risks (though it’s perfectly suitable for low-tech dirty bombs). Thorium is also more naturally abundant than uranium, and its waste products are easier to store because of their much shorter half-lives. Nevertheless, since government interest in the fuel waned during the early years of the cold war and did not recover until recently, the thorium cycle will require significant research, development, and testing in order to overcome the technical hurdles that currently prevent it from becoming a competitive alternative.

The second reason we can’t neatly separate nuclear technologies into piles of “peacetime energy” and “wartime weaponry” is that they share many of the same technological foundations. Technological optimists fervently assert that scientific advancements will deliver safer forms of civilian nuclear power. There is little argument that they’re correct. The caveat, however, is that to whatever degree they are correct, related technological advances for nuclear weaponry will advance as well. Consider, as an analogy, downhill ski resorts designed for recreational snow skiing. During the 1980s and 1990s, growing numbers of snowboarders realized that ski resorts provided easy access to some of the world’s sickest shank-high powder. Snowboarders promptly moved in, and in some cases smoked out the skiers altogether. Today, resorts throughout the world have rules, obligations, and practices that dictate who can use what hills, but the underlying technologies, the snowmaking equipment, plows, and gondolas, are the same.

Nuclear facilities are a tad more intricate, but the analogy holds: a ski hill could be repurposed for snowboarding just as a nuclear energy enrichment facility could be repurposed to enrich devices of destruction. However, unlike skiers and snowboarders, military contractors and arms dealers have a seemingly limitless supply of financial resources and political backing. It’s treacherously naïve to assume that such a force will lie dormant during a global expansion of nuclear enrichment activities. Indeed, the armaments industry has even devised and tested bombs using byproducts from the purportedly peaceful thorium cycle.27 Just in case.

Due to all of this interchangeability, the United Nations formed an oversight organization called the International Atomic Energy Agency (IAEA) to prevent theft or diversion of fissile materials into the wrong hands. A two-year study by the Nonproliferation Policy Education Center, however, paints the picture of an IAEA that is overloaded with responsibilities, working with outdated assumptions, and essentially being asked to do the impossible. Over recent decades, the United Nations doubled the IAEA’s budget, yet it expects the agency to track six times as much highly enriched uranium and plutonium. Over a recent six-year period, IAEA inspectors, who visit monitoring locations every ninety days to manually download monitoring video, discovered twelve blackouts that lasted longer than thirty hours, plenty of time to covertly divert nuclear material to people with ominous intentions.28 Even in the best-run nuclear facilities, the IAEA reports substantial sums of “material unaccounted for.” When the IAEA could not account for sixty-nine kilograms of plutonium at a Japanese fuel fabrication plant in Tokai-mura, enough to build eighteen warheads, it ordered a $100-million plant disassembly in order to locate the material; ten kilograms were never found.29 In a report to Congress, the U.S. Government Accountability Office detailed several corresponding accounts of lost fissile material. Apparently, entire spent nuclear fuel rods occasionally go missing.30

If nuclear fuel enrichment operations expand, so will plutonium detours. So far, we have not discovered appropriate political, technical, ethical, and economic strategies to restrict fissile material to exclusively peacetime activities over the long term. Indeed, such strategies may not be plausible. Even if nations agreed upon preventative regulations and inspections for nuclear fuel materials today, subsequent ruling regimes may not recognize them. The political realities of an era greatly influence how leaders perceive and implement nuclear enrichment technologies. Peacetime nukes beget wartime nukes beget peacetime nukes in an unrelenting historical seesaw between prosperity and destruction. These risks call for contemplation to which we are generally ill suited. As the Department of Energy has strikingly acknowledged, there is no precedent to assume that the United States will remain a contiguous nation-state throughout the period required for nuclear waste to decay below safe levels. It is probably far more likely, some would argue certain, that America’s nuclear waste will someday fall under the direction of some other form of government.

This is complicated by the fact that containment facilities are not as simple as we might imagine. They aren’t as they appear in the beginning of mutant films—vast warehouses of vacant gray hallways idling silently except for the occasional clip-clop of a security guard’s shoes. These facilities require immense staffs to monitor sensing and tracking information on nuclear waste pools, perform regular analysis and research on unstable and evolving chemical waste slurries, maintain and repair vessels, and carry out countless other tasks. The parking lots of these facilities are full. If they weren’t, we’d all be in a lot of trouble, quickly. If those people don’t show up to work some day, perhaps due to a disease pandemic, economic depression, political turmoil, or a natural disaster, the infrastructures established to maintain fissile material in a stable state could deteriorate or even tumble into chaos. With nuclear energy, we risk not only our own well-being but also the contamination of many, if not all, humans who occupy this planet after us.

Carbon, Again

These risks may be worth it, some say, since nuclear power generation produces less carbon dioxide than the alternatives and therefore promises to mitigate the potentially far greater risks of catastrophic climate change. For solar, wind, and biofuel power generation, the projected costs to mitigate a ton of CO2 are very high. Does nuclear fare any better?

Not really.

Assuming the most favorable scenario for nuclear power, where nuclear power generation directly offsets coal-fired base-load power, avoiding a metric ton of CO2 costs about $120 ($80 of which is paid by taxpayers). This figure does not include the costs of spent-fuel containment and the risks of proliferation and radiation exposure, burdens that are especially difficult to quantify. Again, this is far more expensive than boosting equipment efficiency, streamlining control system management, improving cropping techniques, and many other competing proposals to mitigate climate change. Why spend 120 bucks on nuclear to avoid a single ton of CO2 when we could spend the same money elsewhere to mitigate five tons, or even ten, without the risks? Nuclear energy might be a plausible CO2 mitigation strategy after we have exhausted these other options, but we have a long way to go before that occurs. This won’t, however, stand in the way of the nuclear industry’s expensive expansion plans. And by the way, the bill is on us.
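
To put that comparison in concrete terms, here is a minimal back-of-the-envelope sketch; the $120-per-ton nuclear figure comes from the discussion above, while the per-ton costs of the competing measures are illustrative placeholders rather than sourced estimates.

```python
# A back-of-the-envelope sketch of the comparison above. The $120/ton
# nuclear figure comes from the text; the per-ton costs of the competing
# measures are illustrative placeholders, not sourced estimates.
budget = 120.0  # dollars available, enough to avoid one ton of CO2 via nuclear

options = {
    "nuclear (most favorable case)": 120.0,   # $/ton avoided, from the text
    "equipment efficiency (assumed)": 24.0,   # ~5 tons avoided per $120
    "control-system tuning (assumed)": 12.0,  # ~10 tons avoided per $120
}

for name, cost_per_ton in options.items():
    print(f"{name}: {budget / cost_per_ton:.0f} ton(s) of CO2 avoided per $120")
```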

The Total Bill

Every single nuclear plant was built with our help. Additionally, the nuclear industry incurs substantial capital write-offs, through bankruptcies and stranded costs, which leave the burden of that debt on others—a hidden and formidable set of overlooked costs.31 To make matters worse, economies of scale don’t seem to apply to the nuclear industry. Just the opposite, in fact. Historically, as the nation added more nuclear energy capacity to its arsenal, the incremental costs of adding additional capacity didn’t go down, as might be expected, but rather went up.32

If the costs to taxpayers are so high, the risks so extreme, and the benefits so unexceptional, why do nations continue to subsidize the nuclear industry? It’s partly because so many of the subsidies are hidden. Subsidy watchdog Doug Koplow points out, “Although the industry frequently points to its low operating costs as evidence of its market competitiveness, this economic structure is an artifact of large subsidies to capital, historical write-offs of capital, and ongoing subsidies to operating costs.”33 The nuclear industry often loops taxpayers or local residents into accepting a variety of the financial obligations and risks arising from the planning, construction, and decommissioning of nuclear facilities, such as

• accepting the risk of debt default;

• paying for cost overruns due to regulatory requirements or construction delays;

• dropping the requirement of insurance for potential damage to surrounding neighborhoods; and

• taking on the burden of managing and storing high-level radioactive waste.

Since these handouts are less tangible and comprehensible to the public than cash payments, the nuclear industry and its investors have found it relatively easy to establish and renew them. One in particular is especially problematic.

The Decommissioning Subsidy

Travel two hundred miles off the northeast coast of Norway into the Arctic Ocean toward the shores of Novaya Zemlya Island and you’ll see seals, walruses, and aquatic birds as well as numerous species of fish, such as herring, cod, and pollack, much as you’d expect. But some of them will be swimming around an article less anticipated—a curious fabricated object rising above the dark sea floor like an ancient monument, identifiable only by the number “421.” Inside the corroded steel carapace lies a nuclear reactor. Why, we might wonder, has someone installed a nuclear reactor under the sea so far from civilization? It wasn’t built there. It was dumped there—along with at least fifteen other unwanted nuclear cores previously involved in reactor calamities. These cores lie off the coasts of Norway, Russia, China, and Japan.34 Many of the reactors still contain their fuel rods. Resurfacing them and processing them in a more accepted manner would be risky and expensive. But even disposing of the world’s existing nuclear reactors that haven’t been tossed in the ocean won’t be a straightforward proposition. It costs hundreds of millions of dollars to carefully assemble a nuclear power plant, and it costs hundreds of millions to carefully disassemble one as well. The largest problem, of course, is what to do with the radioactive waste.

The Department of Energy started to construct a repository at Yucca Mountain, Nevada, to store the nation’s spent reactor fuel. It was to accept spent fuel starting in 1998, but management problems, funding issues, and fierce resistance by the state of Nevada pushed the expected completion date back to 2020.35 President Obama called off the construction indefinitely, slashing funding in 2009 and finally withdrawing all support in 2011. If completed, the Yucca Mountain crypt would cost about $100 billion.36 Even then, it’s designed to house just sixty-three thousand tons of spent fuel. More than that is already scattered around the country today.37

In the meantime, utility companies have been storing waste in open fields surrounding their plants. A large nuclear power reactor typically discharges twenty to thirty tons of twelve- to fifteen-foot-long spent fuel rods every year, totaling about 2,150 tons for the entire U.S. commercial nuclear industry annually.38 Taxpayers will end up paying billions to temporarily store this waste.39

Another option is to “recycle” the spent fuel into new fuel. However, reprocessing is expensive and leaves behind separated plutonium. Since plutonium is ideal for making bombs, many countries, including the United States, consider reprocessing a proliferation risk. Meanwhile, the United Kingdom, France, Russia, Japan, India, Switzerland, and Belgium reprocess their spent rods. They have separated a combined 250 metric tons of plutonium to date, more than enough to fuel a second cold war. Alternatively, fast-neutron “burner” reactors can run directly on the spent fuel. This presumably sidesteps the plutonium issue, though such plants may not be commercially feasible to build.

A Hard Sell

The Colorado River flows through one of the largest natural concentrations of radioactive surface rock on the planet, containing about a billion tons of uranium in all. The levels of radiation are twenty times the proposed limit for Yucca Mountain, and unlike the glass-encapsulated balls used to store radioactive waste, Colorado’s uranium is free-ranging and water soluble. Berkeley physicist Richard Muller claims, “If the Yucca Mountain facility were at full capacity and all the waste leaked out of its glass containment immediately and managed to reach groundwater, the danger would still be twenty times less than that currently posed by natural uranium leaching into the Colorado River.”40 Does this mean Coloradans are exposed to more radiation than the rest of us? Yes—along with those in Los Angeles who regularly bathe and drink water piped in from the Colorado River. Yet the residents of Colorado and California, together with those of the nearby states—South Dakota, Utah, and New Mexico—experience the lowest cancer incidence rates anywhere in the contiguous United States according to the National Cancer Institute—which all goes to show how tricky it is to assess complex radiation risks.41

According to early documentation of the Chernobyl nuclear reactor meltdown in 1986, the catastrophe exposed thirty thousand people living near the reactor to about 45 rem of radiation each, about the same radiation level experienced by the survivors of the Hiroshima bomb.42 According to a statistical scale developed by the National Academy of Sciences, 45 rem should have raised cancer deaths of residents near Chernobyl from the naturally occurring average of 20 percent to about 21.8 percent—or roughly five hundred excess fatalities. Nevertheless, deaths are only one of many measures we might choose to evaluate harm, and even then, what counts as a radiation fatality in the first place is not so clear and has changed over time. In 2005 the United Nations put the death toll at four thousand. And in 2010 newly released documents indicated that millions more were affected by the fallout and cleanup than originally thought, which in turn led to tens of thousands of deaths as well as hundreds of thousands of sick children born long after the initial meltdown.43 To make matters more complex, the concrete sarcophagus entombing the reactor is now beginning to crack—a reminder that it is far too early to complete a history of Chernobyl and its aftermath. We will have to wait equally long to assess the fallout at Fukushima Daiichi, which, long after the tsunami, is still posing new challenges to our conceptions of acceptable radioactive risk.
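
As a quick check, that excess-fatality figure can be reproduced directly from the numbers cited above; a minimal sketch:

```python
# Reproducing the excess-fatality arithmetic from the figures cited above:
# thirty thousand exposed residents, baseline cancer mortality of 20 percent
# rising to about 21.8 percent.
exposed = 30_000
baseline_rate = 0.20
post_exposure_rate = 0.218

excess_deaths = exposed * (post_exposure_rate - baseline_rate)
print(f"Expected excess fatalities: {excess_deaths:.0f}")  # ~540, "roughly five hundred"
```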

Humans won’t be able to calculate nuclear risks as long as humans have nukes. Perhaps it is this very uncertainty that evokes particularly salient forms of nuclear unease. The emotive impulse that wells up in response to free radiation is a more visceral phenomenon than one bound to the shackles of calculation. Fossil-fuel executives should consider themselves lucky that the arguably more dangerous fallout from fossil-fuel use, which kills tens of thousands of people year after year, has not elicited a corresponding fear in the minds of the citizenry. As a society, we begrudgingly tolerate the fossil fuel–related risks of poisoning, explosions, asthma, habitat destruction, and spills, which regularly spawn tangible harms. Yet when it comes to nuclear power we slide our heads back on our necks and purse our lips with added skepticism. Whether the degree of our collective skepticism toward nuclear power is appropriate, or even justified, doesn’t really seem to matter. The public doesn’t need experts to tell them when to be terrified.

Image
Illustration 7: In the wake of Chernobyl Radioactive bumper cars lie silent in the abandoned city of Pripyat near the Chernobyl reactor. (Photograph courtesy of Justin Stahlman)

As simple as fear, and as complex as fear, public angst has been the nagging bête noire of the nuclear industry. The relatively small leak at Three Mile Island provided ample motivation for the American public to yank back the reins of nuclear power development for decades. The Fukushima meltdowns prompted nuclear cancellations across the globe. Could it happen again? Is it possible that taxpayers and investors could spend billions of dollars constructing a new generation of nuclear reactors just to have a hysterical public again shut the whole operation down following the next (inevitable) mishap? Absolutely. As taxpayers subsidizing the nuclear industry, we must worry not only about the risk of a hypothetical nuclear event with tangible consequences but also about an event with imagined consequences, especially if it should strike during a slow news week. Whether governments, taxpayers, politicians, and investors are willing to increasingly place these wagers will, more than technical feasibility, become the central nuclear issue in coming decades. Then again, some day we may find our choices on the matter to have dwindled. The more nuclear power we build today, the less choice we’ll have about it tomorrow.

Either way, should environmentalists make it their job to promote nuclear power? Proponents argue that nuclear energy produces less carbon dioxide than coal or natural gas. But this might not matter in the contemporary American context. There is little precedent to assume that nuclear energy will necessarily displace appreciable numbers of coal plants. In fact, historically just the opposite has occurred. As subsidized nuclear power increased, electricity supply correspondingly increased, retail prices eased, and greater numbers of energy customers demanded more cheap energy—a demand that Americans ultimately met by building additional coal-fired power plants, not fewer.44
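
A toy calculation can illustrate the shape of this feedback loop; the price drop and demand elasticity below are invented for illustration only and are not drawn from the chapter’s sources.

```python
# A toy model of the rebound dynamic described above. The price drop and
# demand elasticity are invented for illustration only; nothing here comes
# from the chapter's sources.
price_drop = 0.10          # assumed retail price decline from subsidized supply
demand_elasticity = -0.5   # assumed price elasticity of electricity demand

demand_growth = -demand_elasticity * price_drop
print(f"A {price_drop:.0%} price drop spurs roughly {demand_growth:.0%} more demand")
# Historically, the text argues, that extra demand was met by building
# additional coal-fired plants rather than by more nuclear capacity.
```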

It didn’t turn out that way everywhere. Take France or California, for instance. Residents of these regions enjoyed a different set of economic and legal rules that thwarted this perverse feedback loop.45 We’ll come back to them later.

Without first addressing the underlying social, economic, and political nature of our energy consumption, can we assume that nuclear power, or any alternative production mechanism, will automatically displace fossil-fuel use? Should the environmental movement address these underlying conditions before cheering on nuclear or alternative energy schemes? Should we perhaps view alternative energy as the dessert that follows a balanced meal? If so, we have plenty of succulently buttered vegetables to eat before moving on to the brilliantly frosted assortment of saccharine alternatives perched on the silver-plated platter before us.

Re: Green Illusions, by Ozzie Zehner

Postby admin » Tue May 12, 2020 6:21 am

5. The Hydrogen Zombie

Ignorance is the undead’s strongest ally, knowledge their deadliest enemy.

—Max Brooks, The Zombie Survival Guide


By the close of the first decade of the twenty-first century, the hydrogen dream might have seemed dead to any casual observer who happened to pass its rotting corpse on the side of the street. The financial foundations upon which the hydrogen economy stood had been reduced to a shadow. Numerous governments had slashed, yanked, and all but completely eliminated hydrogen funding. Corporations that hastily filled their pockets bringing hydrogen fuel cells to market eventually witnessed their balance sheets tumbling in flames just as quickly. Finally, after the crash and burn of the hydrogen economy, credit crises and financial upheavals swept away the smoldering ashes left behind. But soon after the fatality, something curious started to occur. Citizens beheld the New York Times dedicating a full-spread feature to the hydrogen economy and witnessed CBS News claiming that General Motors’ new hydrogen fuel-cell car was “a terrific drive with almost no environmental impact.”1 Long after the practical infrastructure for the hydrogen economy died, the hollow shell of the dream pressed on—a technological zombie.

Characterizing hydrogen as a zombie technology might seem a bit harsh for those enchanted by the idea of a hydrogen economy, but in fact, it has been called much worse by others—a pipe-dream, a hoax, or even a conspiracy. Nevertheless, these concepts are too blunt to carve out an intricate appreciation for the rise and fall of the hydrogen dream. A more nuanced rendering offers a peek into how diverse groups can coalesce around a technological ideal to offer it not only a life it would never have achieved otherwise but an enigmatic afterlife as well.2

The Hydrogen Economy in Sixty Seconds or Less

The idea of a hydrogen economy is based on two central components: hydrogen (the gas) and fuel cells (the contraptions that combine hydrogen and oxygen to create electricity). At the outset, it is important to correct the common misconception that hydrogen is an energy resource. Hydrogen is simply a carrier mechanism, like electricity, which energy firms must produce. Unlike sunlight, tides, wind, and fossil fuels, hydrogen gas does not exist freely on earth in any significant quantity. Processors must forcibly separate hydrogen from other molecules and then tightly contain the gas before distributing it for use. They most commonly derive hydrogen from natural gas (through steam hydrocarbon reforming) or less frequently from water (through electrolysis). Both processes are energy intensive; it always takes more energy to create hydrogen than can be retrieved from it later on. Hydrogen firms presumably won’t be able to change this restriction without first changing the laws of thermodynamics and conservation of energy.
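
A minimal sketch of that one-way energy accounting follows; the 70 percent electrolyzer and 50 percent fuel-cell efficiencies are assumed round numbers for illustration, not figures from the text.

```python
# A minimal sketch of the one-way energy accounting described above. The
# 70 percent electrolyzer and 50 percent fuel-cell efficiencies are assumed
# round numbers for illustration, not figures from the text.
electricity_in_kwh = 100.0

electrolyzer_efficiency = 0.70  # fraction of input electricity stored as hydrogen
fuel_cell_efficiency = 0.50     # fraction of hydrogen energy returned as electricity

energy_in_hydrogen = electricity_in_kwh * electrolyzer_efficiency
electricity_back = energy_in_hydrogen * fuel_cell_efficiency

print(f"Electricity in:  {electricity_in_kwh:.0f} kWh")
print(f"Stored as H2:    {energy_in_hydrogen:.0f} kWh")
print(f"Electricity out: {electricity_back:.0f} kWh")  # ~35 kWh; always a net loss
```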

Historians generally credit Sir William Grove with devising the first fuel cell in 1839, although it was another fifty years before chemists Ludwig Mond and Charles Langer made the devices practical. The internal combustion engine revolution overshadowed early fuel cell research but proponents slowly coaxed the technology along. Eventually NASA and General Electric unveiled the first modern platinum fuel cell for the Gemini Space Project. In the 1970s the U.S. government, scrambling to respond to oil embargos, began working more closely with industry to advance fuel cell research. In the 1980s car manufacturers joined in.

The Early Years

By the early years of the twenty-first century, almost every automotive company had initiated a fuel cell program. At the 2006 Los Angeles Autoshow, then-California governor Arnold Schwarzenegger stood on a stage to christen fuel cell vehicles “the cars of the future.” Shortly afterward, BMW executive Dr. Michael Ganal took to the podium and declared: “The day will come when we will generate hydrogen out of regenerative energies, and the day will come when we will power our cars by hydrogen. This means no exploitation of natural resources anymore; this means no pollution anymore. We know there is a far way to go, and the new BMW Hydrogen 7 is a big step towards the future.”3

BMW’s Hydrogen 7, along with its numerous American counterparts, such as the Chevrolet Equinox and Jeep Treo, might never have been built if not for one of George W. Bush’s earliest projects, the National Energy Policy Development Group, headed by Dick Cheney and charged with identifying future energy markets. The group immediately locked in on hydrogen. It identified the elemental gas as the “future,” dubiously referring to hydrogen as an “energy source” that produced but one “byproduct,” water.4 The group mentioned few details about how hydrogen might be produced, beyond the claim that it could be created with renewable resources. Most shockingly, the report explicitly considered nuclear power and fossil fuels to be “renewable energy sources.”5

This remarkably generous definition proved quite useful—especially when it came to enrolling supporters. The commission invited CEOs of British Petroleum, DaimlerChrysler, Ford, Exxon, Entergy Nuclear, the National Energy Technology Laboratory, Texaco, Quantum Technologies, and the World Resources Institute to help draft the boilerplate language describing hydrogen that would be adhered to by all constituents and subsequently copied and pasted into ad campaigns, PR initiatives, and annual reports, more or less word-for-word. In 2002 the Department of Energy (DOE) formalized the marching orders with two reports. One of the reports concluded that the government should treat dissenting views of hydrogen as “perceptions based on misinformation,” which should be “corrected.”6 A subsequent report claimed, “The government role should be to utilize public resources to assist industry in implementing this massive transition and in educating the public about fuel cell vehicles’ safety, reliability, cost and performance.”7 The DOE determined that the public reeducation campaigns were to start as early as grade school. The European Commission adopted corresponding language and education campaigns following the same script, which energy multinationals presumably transferred overseas from Washington.

It may not be immediately evident why traditional energy giants were so keen on hydrogen. But it may start to make sense when we consider the enormous quantity of energy required to pry hydrogen atoms from their molecular resting places. Hydrogen reformation can easily consume more fossil fuel than simply deploying natural gas and coal in their traditionally dirty manner. Coordinators of California’s “Hydrogen Highway” even admitted that their vehicles led to more particulate matter and greenhouse-gas emissions than gasoline-powered vehicles on a well-to-wheel basis.8 Still, promoting hydrogen as a clean fuel that energy companies could create using “renewable energy sources” promised to offer particularly valuable environmental cover for dirty fossil-fuel operations. If only mainstream environmental organizations could be brought on board . . . but how?

Throughout the world, numerous geothermal sources, wind farms, and industrial processes emit excess energy that producers can capture, convert, and store in the form of hydrogen, at some cost. These overflows were, and are still today, far too rare and inadequate to produce appreciable sums of hydrogen—certainly not enough to run an economy on the stuff. But the concept was nevertheless alluring, even if it was far-fetched. Mainstream environmental organizations took the bait. Soon they were walking hand-in-hand with fossil-fuel giants to celebrate the hydrogen dream together.9 So long as the public associated hydrogen with windmills, it didn’t really matter how much production occurred behind the scenes using natural gas or coal reformation. The trope was complete, almost.

Automotive companies danced with fuel cells in the public spotlight, world governments set to the task of reeducating the public, environmentalists cashed in their principles for power at a suspect rate of exchange, and fossil-fuel companies stood guard over the whole operation. But another industry placed its bets less conspicuously. In fact, insiders sometimes characterize it as the scowling director of the hydrogen ballet—a cane in one hand and a cigarette in the other, tucked inconspicuously into an offstage seat overlooking the scene. The nuclear power industry.

The nuclear industry’s ambitions corresponded to those of the fossil-fuel industries, save for one small twist. Given the limitations of solar and wind power, the nuclear industry expected to position itself as the only “clean” solution for manufacturing hydrogen on a large scale. A new nuclear plant had not been built in the United States for decades. Hydrogen provided a convenient opportunity to reposition nuclear power as an environmentally progressive undertaking for the nation to pursue—one that would eventually end up costing taxpayers dearly.

Many years before BMW unveiled the Hydrogen 7 at the Los Angeles Autoshow, the nuclear industry lobbied Congress to direct a sizable chunk of the energy budget toward a Next Generation Nuclear Plant (NGNP). Like any nuclear power plant, the NGNP would generate electricity. It would also produce hydrogen. The nuclear industry had a persuasive proponent, Congressman Darrell Issa, a powerful energy productivist and Congress’s wealthiest member, worth about $250 million according to the Center for Responsive Politics.10 During a 2006 congressional hearing before the House Subcommittee on Energy and Resources, Representative Issa could not have been more explicit about the link between the nuclear industry and the hydrogen dream. He stated that the Next Generation Nuclear Plant project was “a key component in the Administration’s plans to develop the ‘hydrogen economy’ because an associated purpose of the advanced demonstration plant is to produce hydrogen on a large scale.”11 If anyone had been uncertain of the nuclear industry’s interest in the hydrogen wager, Issa put those concerns to rest. The nuclear industry was in.

In addition to the nuclear industry, car companies, California politicians, environmental groups, and fossil-fuel giants, there were plenty of others excited to get in on the action. Academic researchers, scientists, journalists, and of course the fuel cell manufacturers themselves all had something to gain from the hydrogen promise. Had hydrogen interests not gravitated to form this emergent superstar, the powerful ideal behind this tiny molecule, the hydrogen dream itself, might never have been. It’s worth mentioning that this particular alignment of interests looks a lot like the unorganized gravitations that stabilize solar, wind, and biofuel technologies. However, unlike the strong bonds holding together the larger dream of an alternative energy future, the ties binding the hydrogen dream eventually began to loosen.

The Undoing

A growing minority of energy analysts started discounting hydrogen as nothing but hype. Others characterized it as an outright hoax. Hydrogen critics argued that any meaningful hydrogen economy would have required not just a couple of large breakthroughs but rather numerous breakthroughs, each monumental in its own right. Their attacks came from multiple fronts, covering hydrogen production and transport as well as its eventual use in fuel cells.

Once hydrogen is created, critics maintained, the challenges of employing it as a fuel multiply. First, hydrogen must be contained, either as a supercooled liquid below −253°C or as compressed gas. Both processes are energy intensive. High-pressure pumps consume about 20 percent of the energy in the hydrogen for compression, while liquefaction wastes 40 percent of the embodied energy.12 Maintaining hydrogen in its liquid form requires specially insulated vessels and massive refrigeration power. For instance, BMW’s car of the future stored cryogenic liquid hydrogen in its tank. While parked, the supercooled liquid warmed inside the car’s thirty-gallon tank. Internal sensors allowed the explosive gas to build to a maximum pressure of 5.5 bar, at which time (assuming everything worked correctly) the hydrogen would overflow through a pressure valve, combine with oxygen, and drip onto the ground as water. As long as the car was parked, the process would continue until the tank was empty. Comedian and automobile enthusiast Jay Leno drove the car around Burbank, California, during a celebrity test drive, later joking: “So a guy drives into a bar in a BMW Hydrogen 7 and the bartender says, ‘What do you want?’ The guy answers, ‘I just came in to take a leak.’” Reporter Matthew Phenix quipped in a Wired magazine report that the derisory Hydrogen 7 was “saving the world, one P.R. stunt at a time.”13 Not a particularly glowing endorsement from perhaps the nation’s most techno-friendly news source.
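
Those containment overheads are easy to put in concrete terms. In the sketch below, the 20 and 40 percent figures come from the paragraph above, while the per-kilogram energy content is an assumed round number near hydrogen’s lower heating value.

```python
# Putting the containment overheads quoted above into concrete terms. The
# 20 and 40 percent figures come from the text; the 33.3 kWh/kg energy
# content is an assumed round number near hydrogen's lower heating value.
h2_energy_kwh_per_kg = 33.3

for method, overhead in [("compressed gas", 0.20), ("cryogenic liquid", 0.40)]:
    usable = h2_energy_kwh_per_kg * (1 - overhead)
    print(f"{method}: ~{usable:.0f} kWh usable per kg "
          f"(of ~{h2_energy_kwh_per_kg:.0f} kWh contained)")
```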

In other concept vehicles, on-board supercoolers held hydrogen in its liquid form, but these units had to be powered 24-7. Another option involved blocks of solid metal hydrides—essentially giant heavy sponges capable of soaking up hydrogen—but even after years of development such schemes proved about as clumsy as one might imagine. The most promising concept vehicles stored their hydrogen as a compressed gas, but the tanks were ponderously heavy unless crafted from expensive carbon fiber, which is still prone to exploding in a crash.14 Even though many researchers believed these tanks would be as safe as natural gas or gasoline storage, convincing the public would be the larger hurdle. Might drivers feel hesitant to zoom down the freeway atop pressurized tanks of a hot-tempered gas that has an atomic bomb named after it and brings to mind an exploding Zeppelin?15

Critics also attacked the proposed hydrogen transportation and filling-station infrastructure. There’s the obvious chicken-and-egg problem: Why would car companies produce hydrogen vehicles without hydrogen filling stations, and why spend billions on an infrastructure without an existing market of hydrogen-vehicle owners waiting to fill up? True, legislative tools could have spurred construction of a national network, as was begun in California, but it would have cost half a trillion dollars in the United States alone, according to a Department of Energy chief.16

Even then, distributing the gas to filling stations would have been grueling. A tanker truck can carry enough gasoline for eight hundred cars but can only hold enough hydrogen for eighty.17 The requisite back-and-forth trips would have consumed enough diesel to offset 11 percent of the hydrogen’s energy after just a 150-mile jaunt, according to one critic.18 Distributors could have shortened the trips by building pipelines, but hydrogen requires novel pipeline technologies because it embrittles metal pipeline walls, couplings, and valves. Moreover, hydrogen molecules are excellent escape artists; they are tiny enough to squeeze their way through the narrow molecular-level gaps within solid materials. As a result, hydrogen can seep right through the walls of a solid pipe, a serious consideration given the large surface area of a distribution pipeline.19
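
A rough sketch of that delivery penalty, using only the figures quoted above:

```python
# A rough sketch of the delivery penalty described above, using only the
# figures quoted in the text.
cars_per_gasoline_tanker = 800
cars_per_hydrogen_tanker = 80

trip_multiplier = cars_per_gasoline_tanker / cars_per_hydrogen_tanker
print(f"Hydrogen requires ~{trip_multiplier:.0f}x as many tanker trips as gasoline")

diesel_offset = 0.11  # fraction of delivered hydrogen energy burned as diesel
print(f"A 150-mile jaunt burns diesel worth {diesel_offset:.0%} of the hydrogen's energy")
```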

Reforming natural gas into hydrogen right at filling stations could have bypassed some distribution concerns, but small-scale reformers are expensive and inefficient, especially if they must store the carbon dioxide exhaust. Hydrogen hecklers argued that the nation’s drivers would be better off simply pumping natural gas directly into their vehicles rather than going through the trouble of reforming it into hydrogen, which would contain less energy, yield the same greenhouse gases, occupy three times the volume, and limit use to fuel-cell vehicles that were incredibly expensive.

However, hydrogen proponents proposed a cleaner way to secure hydrogen: electrolysis, a process wherein electricity separates water into hydrogen and oxygen. Environmentalists envisioned that wind turbines and solar panels would power electrolyzers. Meanwhile, fossil fuel and nuclear industry executives knew they didn’t have to worry about solar or wind taking over anytime soon. In 1994 researchers erected a solar-powered hydrogen station, called Sunline, outside of Palm Springs, California, but it took ten hours of solar electrolysis to produce just one kilogram of hydrogen, the energy equivalent of about one gallon of gasoline. Hooked to the grid and drawing power from nearby wind turbines, the station could have theoretically produced up to sixteen kilograms of hydrogen per day, “assuming optimal season conditions,” according to Sunline’s own calculations.20 Even if there were enough excess solar and wind power available in the American grid to electrolyze hydrogen, the costs of the massive sums of raw electrical power along with the requisite transformers, electrolyzers, explosion-proof compressor pumps, cryogenic refrigerators, and insurance would have been “economic insanity,” claimed critic Robert Zubrin. In a critical exposé, he remonstrated that for an ongoing investment of “$6,000 per day, plus insurance costs, you could make $200, provided you can find fifty customers every day willing to pay triple the going price for automobile fuel.” He continued, “I don’t know about you, but if I were running a 7-11, I’d find something else to sell.”21
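
Zubrin’s arithmetic is easy to reconstruct. In the sketch below, the daily output, cost, and revenue figures come from the text; the implied price per kilogram is merely derived from them, not a quoted market price.

```python
# Reconstructing Zubrin's station arithmetic. The sixteen-kilogram daily
# output and the $6,000 and $200 figures come from the text; the implied
# price per kilogram is simply derived from them, not a quoted market price.
kg_per_day = 16          # Sunline's best case, "optimal season conditions"
daily_revenue = 200.0    # Zubrin's quoted daily takings
daily_cost = 6_000.0     # Zubrin's quoted ongoing daily investment

print(f"Implied price: ${daily_revenue / kg_per_day:.2f} per kg of hydrogen")
print(f"Daily shortfall: ${daily_cost - daily_revenue:,.0f}")
```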

Nevertheless, another concern was eclipsing the difficulties of creating the hydrogen fuel itself. Critics zeroed in on the high cost and limited durability of fuel cells, the electrochemical devices that could combine hydrogen and oxygen to create usable electricity for cars, buildings, laptops, and other devices. There was no doubt that when coupled with an electric motor in a vehicle, fuel cells were more efficient than internal combustion engines. However, critics pointed out that, as designed, the fuel cells had an operational life of only about thirty thousand miles and therefore would have to be replaced more frequently than a car’s brake pads—and the fuel cells weren’t cheap.22

Despite the billions spent to commercialize fuel cells, they remained unaffordable partly due to the high cost of platinum, a catalyst sparingly applied to fuel-cell membranes in layers just a few atoms thick. Even at these reduced concentrations, economists warned that large-scale fuel-cell production could spark platinum price bubbles, tilting the overall scheme into an uphill economic challenge unless manufacturers could identify a cheap platinum substitute.23

In short, critics argued that automotive fuel cells had proven to be extraordinarily expensive, finicky in bad weather, short-lived, and prone to molecular clogging, which dramatically reduced their efficiency. Joseph Romm, former director of the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy, observed, “If the actions of Saddam Hussein and Osama Bin Laden and record levels of oil imports couldn’t induce lawmakers, automakers, and the general public to embrace existing vehicle energy-efficiency technologies that will actually pay for themselves in fuel savings, I cannot imagine what fearful events must happen before the nation will be motivated to embrace hydrogen fuel cell vehicles, which will cost much more to buy, cost much more to fuel, and require massive government subsidies to pay for the infrastructure.”24

Hydrogen’s future wavered.

The Fall

It began just like any other bubble. By the early years of the twenty-first century, the costs of commercially viable fuel cells had triumphantly dropped from the tens of millions of dollars per unit in the 1960s to below $100,000. Stocks of fuel-cell and hydrogen-related component manufacturers, such as Ballard Power Systems and Millennium Cell, sprang to all-time highs. And in 2006 Popular Mechanics magazine predicted that fuel cells could cost as little as $36,000 if mass produced.25 That year, a company called Smart Fuel Cell hit the stock market commanding an impressive market capitalization, with shares trading at $150. But the banana peel was already laid out.

Platinum prices were rising. In the early years of the century, spot prices for platinum doubled. Even as pundits proclaimed that the rare metal was overpriced, it doubled again by 2008. It wasn’t just platinum prices that were making investors jittery; traders took note that even though fuel-cell firms were not burning oil, they were quickly burning through cash. “The bread-and-butter profits we need to see are years away. It’s not even a niche market yet,” observed John Webster, coauthor of a PricewaterhouseCoopers investigative study on fuel cells.26 Esteemed industry analyst David Redstone pointed out that even though Ballard Power Systems had “a great public relations machine,” and politicians were “interested in fuel cells,” the industry as a whole had overpromised. “There is not a stream of commercial revenue. There are not products. Overpromising and underperforming leads to investor disappointment,” claimed Redstone.27 Investors fled.

A year after Smart Fuel Cell’s issue, the stock had dropped from $150 to $50, and by 2008 it was trading below $15. Ballard Power Systems, which had been trading at over $100 per share, plummeted to $4. Investors slashed the market capitalization of Millennium Cell in half by 2002, then again by 2004, then again by 2006—the same shares that attracted investors at $25 in 2000 were having a difficult time finding support at the five-cent level in 2009. By 2011, keepsake investors could buy twenty-five shares for less than a penny.

The smart money left, and so did the politicians. Posthaste. Originally, Schwarzenegger had forecast a “Hydrogen Highway” with some two hundred filling stations by 2010, but by 2009 the state had completed only a couple dozen, and the project had stalled even before he left office in 2011. Governor Schwarzenegger dropped the hydrogen dream as quickly as he had picked it up. So did President G. W. Bush. After hydrogen became a central feature of his energy plan in 2003, Bush didn’t even utter the word “hydrogen” during his State of the Union address in 2007, or at any time thereafter. Finally, late one evening in 2008, Bush slipped into hydrogen’s bedroom and slid a dagger into the movement’s frail heart by quietly pulling funding for a FutureGen coal-to-hydrogen production facility, which proponents considered a key element in realizing their dream.28 During his first months in office, President Obama marched on to the scene, grabbed the dagger, and twisted. He finished the kill by proposing to eliminate the remaining $100 million of funding from the federal government’s hydrogen fuel-cell venture with carmakers.29 At the wake, Obama’s energy secretary, Steven Chu, stood up to say a few words. “We asked ourselves, ‘Is it likely in the next 10 or 15, 20 years that we will convert to a hydrogen car economy?’ The answer, we felt, was ‘No.’”30 Chu renamed the hydrogen fuel-cell group and recommended reorienting remaining fuel-cell research away from vehicles and toward a few low-prestige applications such as building power backup and battery replacement.

Betrayed, broke, stabbed in the back, and finished off—the hydrogen economy was most certainly dead. But it was still moving.

The Undead

Once the stage is yielded to the hydrogen skeptics, it’s difficult to imagine the “hydrogen economy” as anything more than a smokescreen designed so that political and corporate elites might dazzle us as they shuffled energy subsidies behind their backs. While there may indeed be a good bit of shuffling going on in Washington, the lesson behind hydrogen, as with many other energy technologies, is far more nuanced than the existing assemblage of pyrotechnic literature on the subject might indicate.

The high-flying hopes and dreams for a hydrogen economy encountered severe turbulence around 2006—not a good sign for an industry premised on hopes and dreams. Early on, investors determined that automotive fuel cells were nothing more than glorified science-fair experiments, hardly a reasonable basis for alleviating smog, CO2 emissions, conflicts, and costs associated with the nation’s ever-raging dependence on fossil fuels. Steven Chu, the Nobel Prize–winning energy secretary, was similarly put out. Nevertheless, well-established hydrogen promoters continued to palm off these alleged snake-oil liniments on the public. It wasn’t just environmental groups, carmakers, and mainstream energy companies, but also political representatives at many levels of state and federal government and, for a time, all the way to the Oval Office.

Critics claim that we spent billions of our hard-earned dollars and all we have to show for it are a few hydrogen vehicle “screwvenirs.” Zubrin concluded, “The hydrogen economy makes no sense whatsoever. Its fundamental premise is at variance with the most basic laws of physics. The people who have foisted this hoax on the American political class are charlatans, and they have done the nation an immense disservice.”31 Many hydrogen detractors conclude that the hoax was intentional and that the truth was somehow kept secret. But the formal and informal coordination between regulators, politicians, scientists, environmentalists, and corporations was far too loose to sustain an outright conspiracy of the sort plotted by suspense novelists. Only the most tightly controlled organizations can hold a slippery secret in their grasp without it being leaked. Had such a diversity of people been in on a hydrogen ruse, someone would have eventually squealed. Yet if nobody manufactured a hoax, then how was the effect of a hoax created?

This question becomes even more puzzling. Even though Wall Street had handily dismembered the hydrogen industry, the reversal of fortunes didn’t dampen the public, scientific, or media enthusiasm surrounding hydrogen in subsequent years. Numerous government and university research budgets and disbursements, which had been preplanned in years prior, were still flowing. The nuclear establishment, hardly prepared to lose face, nervously kept its hydrogen sights on autopilot. Car company PR and advertising departments still found it useful to tout their fuel-cell concept cars to a public that was apparently unable to recognize the promotions as outdated greenwashing. Journalists were evidently no savvier; with prohydrogen press releases still streaming into its news office, the New York Times published a prohydrogen feature in 2009, in which it embarrassingly cheered on an industry and its associated product lines that had essentially been bankrupted years previously. The Times was not alone.

Even though the hydrogen economy had died, it was still busy posing for photo shoots, presenting at environmental conferences, speaking for the automotive industry, booking international trips, and eating at fine restaurants. It had even orchestrated a coup d’état in Congress to partially reinstate its funding. The hydrogen economy was not dead, but undead.

How was this possible?

Some might claim the hydrogen economy was never really alive to begin with; it surely never existed in any tangible way. Few people had ever seen a hydrogen vehicle, let alone driven one. The hydrogen economy was nothing more, and nothing less, than a dream—a damn good one. It allowed people the luxury of imagining a world of abundant energy, a clean utopia where the only pollution would be water vapor, with enough credible science mixed in to make the whole affair seem plausible.

It isn’t the first time that critical environmental inquiry has been displaced by such utopian lobotomies. Since the 1970s, environmentalism in America and Europe has gravitated toward the theme of “ecological modernization”—the idea that the treadmill of technological progress will solve all environmental troubles. Martin Hultman, a researcher at Sweden’s Linköping University, compares the utopian visions surrounding the hydrogen economy with the ones once envisioned by proponents of nuclear power. “They are similar in that they both invoke the dream of controlling a virtual perpetuum mobile, propose an expert-lay knowledge gap, downplay any risks involved, and rely on a public relations campaign to ensure the public’s collaboration with companies and politicians,” asserts Hultman. “The idea that the level of energy use is unimportant and not connected to environmental problems is constructed by describing fuel cells as intrinsically clean in themselves and producing only water as exhaust.”32

Others might say the hydrogen economy never died. After all, technologies are more than just physical artifacts—the gears, the batteries, the circuit boards. Technologies are a hybrid of intentions, interests, promises, and pretensions. Technologies are stories. If they weren’t, they’d never catch on. The story of the hydrogen perpetual motion machine could not have been formed and fueled by just any single interest group, any single conspirator, or any single hoaxer, as it were. As with solar, wind, and biofuel technologies, the hydrogen dream arose from a complex alignment of interests coalescing to synchronize a future narrative—one that featured selected benefits and diminished or overlooked associated side effects and limitations.

Elected officials, many of whom had worked in the energy sector and were tacitly imbued with its productivist cant, stood to gain both donors and constituents by supporting clean hydrogen.33

Gas, coal, and nuclear industrial elites knew there was money to be made and valuable cover to be gained by articulating clean hydrogen visions.

Researchers knew their work would be funded if it was framed as a national priority.

Environmentalists could feel good about their work, while gaining public and financial support by pledging allegiance to the clean fuel of the future.

Automotive manufacturers saw opportunities for subsidies, profits, and most of all a clean public relations cloak, offering protection from those who saw their industry as socially and environmentally destructive.

And the greater public, primed with the verve of ecological modernization, was willing, perhaps even eager, to be convinced that hydrogen was, in fact, the future of energy.

It is perhaps too early to write a history of the hydrogen economy (though the Smithsonian’s National Museum of American History has begun just that). Lesser stories have created much more.34 The cluster of technologies surrounding the hydrogen dream is sure to be resurrected by various interests. They will push to develop more economically viable hydrogen systems, especially for niche applications such as power backup, battery replacement, and the like. Nuclear hydrogen will likely reach a hand up from the grave at some point, but critics are certain to shackle it to their litany of limitations. For our purposes today, however, this technological zombie hardly presents a passable response to our energy problems. Its foul smell alone indicates it has no place in mannerly conversation on the subject.

In any case, there is a more chilling issue to address. It appears that there’s more than one zombie in our midst.

6. Conjuring Clean Coal

Call it a lie, if you like, but a lie is a sort of myth and a myth is a sort of truth.

-- Edmond Rostand, Cyrano de Bergerac


The first major earthquake recorded in Australian history rocked residents of Newcastle on December 28, 1989. Ten years later, on the other side of the planet, an earthquake hit Saarland, Germany. Separated by time and space, these anomalous quakes might have seemed completely unrelated. They weren’t.

Geophysicists have controversially identified a common trigger: coal mining. They point out that coal-mining operations can collapse land surfaces, divert waterways, and drain wetlands. Generations of mining can induce quakes that are now compromising previously seismically stable regions throughout the world. Newcastle’s earthquake led to deaths, injuries, and $3.5 billion in damage, more than the value of all of the coal ever extracted from the region.1 Nevertheless, earthquakes may rank among the lesser concerns of mining, processing, and burning this fuel.

Coal in Sixty Seconds or Less

Archaeologists trace coal use back three thousand to four thousand years, to Bronze Age China and, later, to Roman Britain. Today, nations burn coal to generate electricity, distill biofuels, heat buildings, smelt metals, and refine cement. Half of America’s electricity comes from coal, along with 70 percent of electricity in India and 80 percent in China.2 Coal is more widely available throughout the world than hydropower, oil, and gas. It’s got a lower sticker price too. Understandably, coal attracts world leaders concerned about regional energy security. For these geographic, economic, and political reasons, it is problematic to assume that countries will willingly stop unearthing their coal reserves unless cheaper and equally secure alternatives arise. Even in rich Australia, a federal minister quipped, “The coal industry produces 80 percent of our energy and the reality is that Australia will continue to rely on fossil fuels for the bulk of its expanding power requirements, for as long as the reserves last.”3 In fact, the International Energy Agency (IEA) expects the coal sector’s growth to outpace all other sectors, including nuclear, oil, natural gas, and renewables.

The problem, of course, is that despite its many benefits, coal is still dirty—in so many ways. Burning coal releases more greenhouse gases than any other fossil fuel per unit of resulting energy; it yields more than two times the CO2 of natural gas. Coal features infamously in dialogues involving international ethics, workers’ rights, and community impacts more broadly. Here’s the critics’ short list:

• Air pollution: The sky’s vast quantity of visible stars reportedly shocked Beijing residents during a coal-burning ban in preparation for the Olympic Games in 2008. During the late 1800s and early 1900s, London, New York, and other industrialized cities were known for their characteristic coal smog that killed thousands of people. Today, air quality in these cities is much improved due largely to better coal-burning practices. However, combustion is still the primary source of heavy metals, sulfur dioxide, nitrogen oxides, particulates, and low-level ionizing radiation associated with coal use. Toxic emissions from mining and transportation are also significant.4

• Water contamination: Mining, transporting, storing, and burning coal also pollutes water aquifers, lakes, rivers, and oceans. Coal-washing facilities alone eject tens of millions of tons of waste into water supplies every year.5

• Land degradation: Above-ground coal mining destroys prairies, levels forests, and lops off mountain peaks.

• Fly-ash waste: Coal plants generally capture fly ash, a byproduct of coal combustion, which often ends up in unlined landfills, allowing toxins to leach out or blow away.6

• Occupational risks: Poisonous gases, tunnel collapses, flooding, and explosions kill thousands of coal miners every year. Tens of thousands are seriously injured and exposed to long-term respiratory hazards including radioactive fumes.

• Community health risks: One prominent study published in Science reviewed the widespread practice of mountaintop removal coal mining in the United States and found that local residents also suffer from unusually high rates of chronic pulmonary disorders, hypertension, lung cancer, chronic heart disease, and kidney disease.7 The report’s lead author, Margaret Palmer, of the University of Maryland, stated, “Scientists are not usually that comfortable coming out with policy recommendations, but this time the results were overwhelming . . . the only conclusion that one can reach is that mountaintop mining needs to be stopped.”8

This list contains but a fraction of the concerns that critics voice regarding coal. We might think we should use a lot less of the stuff—unless, that is, we start to believe it too can be clean.

Ladies and Gentlemen, Look at My Right Hand

In recent years, the coal industry has applauded itself for introducing smokestack “scrubbers” into many of its plants, which spray water and chemicals directly into exhaust fumes to filter out contaminants. Less celebrated is the story of the resulting sludge. As a matter of general practice, it’s simply dumped into nearby lakes, rivers, and streams—the same waterways that supply drinking water to the general public—right where much of it would have ended up if it hadn’t been scrubbed out in the first place.9 Fierce lobbyists shield this dumping from regulation.10 The Clean Water Act limits some pollutants but not the most dangerous ones, such as arsenic and lead, which coal-fired power plants emit.11 Even so, plants routinely violate the provisions the act does contain, according to a New York Times investigative report.12 Of the more than three hundred coal-fired power plants that have violated regulations since 2004, 90 percent haven’t faced a single fine or sanction for doing so. After a plant in Masontown, Pennsylvania, violated the act thirty-three times in the three years between 2006 and 2009, it was fined a total of just $26,000 during a period when the parent company reported $1.1 billion in profits. The Environmental Protection Agency (EPA) has attempted to institute stricter controls, but regulators must swim upstream in a current flowing with tens of millions of coal industry dollars.

Cleaner options exist, but these too come with risks of their own. Wastewater facilities can extract many of the toxins and solids from scrubber waste and place them into landfills. These synthetically lined tombs can cover over a hundred acres each. But liners are not foolproof; they can burst or leak over time. According to a recent EPA report, residents living near some leaky landfills face cancer risks that exceed federal health standards by a factor of two thousand.13

Of coal’s almost countless effluents and side effects, the industry has notably reduced one: sulfur dioxide emissions. Some coal deposits naturally contain less sulfur, which converts to sulfur dioxide when burned, and modern coal facilities can remove most of the remaining sulfur dioxide from combustion fumes. Giving themselves a brisk pat on the back, coal promoters deployed public relations teams in the late 1990s to advertise their achievement in attempts to convince others that the reduction of this one pollutant could speak for the cleanliness of the entire industry. They correspondingly felt justified in christening their strategy “clean coal.”

Anyone would agree that preventing sulfur dioxide from entering the atmosphere is a good thing. However, using one of the dirtiest and most destructive practices on earth as a benchmark to judge a fuel “clean,” and then only reducing one of its many side effects, could certainly be interpreted as less than genuine. It’s like claiming to have done the dishes after just washing one fork.

Coal advocates had a big job ahead of them but they were well equipped. To start, the industry courted journalists to ensure that coal’s self-appointed green credentials achieved a level of credibility. They also began advertising. Unlike tobacco ads, which legislators have strictly limited due to public health concerns, advertisements for burning coal run freely. They’d probably run more frequently if we weren’t already addicted to the stuff. Public relations firms design these big-blue-sky ads for clean coal to leave us with a sense of “Huh, I guess that stuff isn’t so bad after all.” They are pervasive, persistent, and often successful.

Coal companies also deliver loads of money to political campaigns and rally their employees to support candidates who support coal. For this, they are greatly rewarded. The already very profitable coal industry receives tens of billions of dollars every year in government subsidies throughout the world. Why subsidize such a rich and dirty industry? Professor Mark Diesendorf at the University of New South Wales contends that politicians and governments, eager to stay in office, see opportunities in appealing to well-organized interest groups whose activities spread costs over many less-organized and less-informed constituents.14 During the 2008 U.S. presidential election, all of the major candidates supported coal use by invoking “clean coal” technology. Incidentally, not one of them suggested that the nation could instead, or conjointly, aim to cut coal consumption to European levels by wasting less electricity.

Coal promoters now extend the term “clean coal” to include carbon dioxide sequestration, or more accurately, the mere promise that carbon sequestration could be employed on a large scale. It turns out this is quite a large promise. It centers on capturing carbon dioxide from coal plant exhaust and storing it underground, where proponents claim it will dissolve and eventually, after millennia, react with other elements in surrounding rocks to form harmless mineral deposits. If this sounds kind of dreamy, that’s because it still is. Proponents have shown that some underground carbon mitigation techniques can work on a small scale, but it’s less clear that they could be ramped up to generate appreciable effects on climate change. Many scientists hotly contest the very plausibility of such schemes. Beyond the cost, which both proponents and detractors agree would be extremely high, there are several other hurdles to jump before carbon sequestration could become a practical and widespread option for dealing with excess CO2. First, fossil-fuel plants must capture CO2 either from flue gases or during the chemical process employed in “integrated gasification combined cycle” (IGCC) facilities. Capturing and compressing carbon dioxide from flue gases requires so much extra energy and equipment that doing so adds 60 percent to the cost of the resulting electricity.15 IGCC carbon capture is easier, but these plants can only use certain types of coal; modifying one to run on lower-quality coal, such as Texas lignite, would add 37 percent to the cost of the plant and reduce efficiency by 24 percent.16

Second, after the industry captures and compresses the carbon dioxide, it must store the liquid or gaseous CO2. Ubiquitous saline aquifers are one storage option, but these are prone to seismic instability and uncertainties of storage life.17 Depleted oil and gas fields are obvious storage sites, but many of these deep underground crypts are structurally compromised after having been drained of their pressurized oil or inundated by multiple well piercings.18 If the U.S. coal industry captured and liquefied just 60 percent of its annual CO2 emissions, the effluent’s volume would equal the volume of oil that Americans consume over the same period.19 Geologists will be hard pressed to locate favorable storage sites on such a monumental scale. This may force the industry to risk even less secure formations.
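
A back-of-envelope sketch conveys that scale. Every input below is a round assumption of mine (roughly two billion metric tons of CO2 per year from U.S. coal plants, CO2 compressed to about 800 kilograms per cubic meter, and U.S. oil consumption near nineteen million barrels per day), not a figure drawn from the studies cited above:

# Rough scale check: volume of captured CO2 vs. U.S. oil consumption.
# All inputs are round, assumed values for illustration only.
US_COAL_CO2_TONNES = 2.0e9     # annual CO2 from U.S. coal plants (assumed)
CAPTURE_FRACTION   = 0.60      # the 60 percent capture scenario
CO2_DENSITY_KG_M3  = 800.0     # compressed/liquefied CO2 density (assumed)
OIL_BBL_PER_DAY    = 19.0e6    # U.S. oil consumption (assumed)
M3_PER_BARREL      = 0.159

co2_volume = US_COAL_CO2_TONNES * 1000 * CAPTURE_FRACTION / CO2_DENSITY_KG_M3
oil_volume = OIL_BBL_PER_DAY * 365 * M3_PER_BARREL

print(f"Captured CO2: {co2_volume:.2e} cubic meters per year")  # ~1.5e9
print(f"Oil consumed: {oil_volume:.2e} cubic meters per year")  # ~1.1e9

Under these assumptions the two volumes land within the same order of magnitude, which is all the comparison in the text requires.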

This brings us to a third concern. Since economic factors will likely favor large stores over smaller ones, we must likewise consider the risk of large CO2 releases. A 1986 tragedy in Cameroon, where CO2 bubbled up from a volcanic crater lake and killed 1,800 people, portends what might ensue from such a release.20 Since carbon dioxide is heavier than air, the gas quickly formed a thick blanket over the landscape, thirty miles in diameter. Rescue crews arrived to complete silence—no laughter of children, no bird songs; thousands of dead cattle did not attract the buzz of even a single surviving fly. If ever realized, geosequestration sites will be prone to slow leaks, abrupt escapes, sabotage, and attacks.21

Ecosystem health is a fourth concern of pumping carbon dioxide where it doesn’t naturally occur. Mixed with water, CO2 partially dissolves to form carbonic acid. Excess soil and waterway acidification can harm microorganisms and, in turn, the species that rely on them for survival (including us). The planet’s oceans currently absorb about a million tons of carbon dioxide per hour, a third of the rate at which we produce the gas. Presently, this absorption is slowing climate change but is making naturally alkaline seawater more acidic. Ocean acidity endangers the shells and skeletons of sea life, in much the same way that carbonated drinks can soften and dissolve tooth enamel. Following current trends, parts of the oceans will become too acidic to host much of the life that lives there today.22 The Economist considers the risk forbidding enough to warn, “No corals, no sea urchins and no who-knows-what-else would be bad news indeed for the sea. Those who blithely factor oceanic uptake into the equations of what people can get away with when it comes to greenhouse-gas pollution should, perhaps, have second thoughts.”23
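
A quick conversion (my arithmetic, not the text’s) puts that hourly uptake figure in more familiar annual terms:

# Converting the ocean's hourly CO2 uptake to an annual rate.
HOURLY_UPTAKE_TONNES = 1.0e6                        # about a million tons per hour
annual_uptake = HOURLY_UPTAKE_TONNES * 24 * 365     # ~8.8 billion tons per year
implied_emissions = annual_uptake * 3               # uptake is about a third of output

print(f"Ocean uptake:      {annual_uptake / 1e9:.1f} billion tons CO2/year")
print(f"Implied emissions: {implied_emissions / 1e9:.0f} billion tons CO2/year")

That works out to roughly nine billion tons absorbed each year, implying total human output somewhere above twenty-five billion tons annually.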

Scientists and legislators may ultimately decide that the risks and costs of carbon sequestration are worth it to reduce carbon dioxide buildup in the atmosphere. Even then, optimists suggest that carbon sequestration technologies won’t be ready for mainstream deployment for at least another twenty years. A lot of coal will have been burned by then.

Assuming that nations could muster the political will, technologies, and funding to develop carbon capture and storage, how effective would it be? A study group in Australia, one of the largest coal-producing nations, set out to answer this question. Their findings are humbling. They determined that the cumulative CO2 emissions reduction over the first thirty years of a sequestration program would be just 2.4 percent—not terribly impressive given the costs and risks that such an undertaking would involve.24

Why would the impact be so small? The decades-long wait for commercial viability is just part of the problem. The larger problem is regulatory. Coal firms aren’t proposing to close down heavily polluting plants in order to build ones that capture carbon dioxide, but rather to add new plants to existing capacity.25 In fact, the industry is using the very promise of carbon capture and sequestration to deflect calls to clean up their industry. For instance, Arch Coal chief Steven Leer touted carbon capture and sequestration over regulation in a recent St. Louis Post-Dispatch interview. When the journalist pointed out that these processes are twenty years away from becoming a reality, Leer conceded, “Probably.” He swiftly translated this twenty-year lag into something much less foreboding by dismissively stating, “Twenty years in the energy world is right now. If you think about the infrastructure needed for carbon capture and sequestration, twenty years is a very short period of time.”26 Yes, a short period of time considering the obstacles, but nevertheless a very long time to wait for a modest improvement in CO2 releases, especially, as we shall explore later, when there are far better options that we can deploy much sooner.

Image
Figure 7: Clean coal’s lackluster potential. Even if aggressively deployed, carbon sequestration would have little impact on total emissions over the coming decades, according to an Australian study. The taller bars display CO2 emissions (in metric tons) of a business-as-usual trajectory for coal-intensive Australia. The slightly shorter bars indicate CO2 emissions assuming an aggressive carbon sequestration rollout. (Data from the Australia Institute)

If the process of carbon capture and storage will require decades of research, billions of dollars, risky uncertainties, and meager paybacks, why has it become such a central focus of energy policy today? One needn’t follow the path of money too far to discover the answer.

During the 2008 U.S. election cycle, the coal industry increased its budget for the National Mining Association, an industry lobbying group, by 20 percent, to $19.7 million.27 The industry maintained this pressure during the 2010 and 2012 election cycles.28 Candidates quickly lined up to accept their donations. Barack Obama’s election campaign in 2008 released the following pronouncement: “Carbon capture and storage technologies hold enormous potential to reduce our greenhouse gas emissions as we power our economy with domestically produced and secure energy. As a U.S. senator, Obama has worked tirelessly to ensure that clean coal technology becomes commercialized. An Obama administration will provide incentives to accelerate private-sector investment in commercial-scale zero-carbon coal facilities.”29 And their specific recommendation? Out of the thousands of American coal plants, they proposed to convert just five to ten plants over to capturing carbon. It is hardly worth calculating the effects of such a plan even if it were to be wildly successful. It could be better understood as a pilot project. In 2010 the administration initially set a target completion date of 2016, but it later announced a verdict on carbon sequestration that might be best described as wishy-washy: “While there are no insurmountable technological, legal, institutional, regulatory or other barriers that prevent CCS [carbon capture and sequestration] from playing a role in reducing [greenhouse gas] emissions, early CCS projects face economic challenges related to climate policy uncertainty, first-of-a-kind technology risks, and the current high cost of CCS relative to other technologies.”30 Not a glowing prediction but perhaps about the best the industry could have hoped for. Nevertheless, Obama’s Interior Secretary Ken Salazar opened massive stretches of public land to coal mining corporations in 2011, and Obama continued to support coal during his 2012 reelection bid. Not everyone, however, was as friendly with the coal industry as the president.

Community groups across the country stood up to protest coal power plant construction. In Kansas, the state legislature blocked two coal-fired power plants from being built, an obstruction that residents overwhelmingly supported. In retaliation, the industry launched an initiative to reeducate the public in an effort to provide cover for coal-supportive politicians. Coal and utility companies spent $35 million to fund a public relations organization to support coal-based electrical generation, combat legislation designed to reduce climate-changing emissions, and portray their industries as more responsible and concerned about environmental problems. The organization, Americans for Balanced Energy Choices, ran an advertisement claiming coal is “70 percent cleaner” than in previous eras. That’s somewhat true. But it failed to acknowledge that the coal industry didn’t initiate the transformation. In reality, tighter federal regulations forced the clean-up—regulations that the industry vehemently fought.31

Cleaner and more efficient coal plants can and should be part of our future. While they are more expensive than older designs, they are far cheaper than carbon sequestration and can be deployed more quickly. Just don’t expect the industry to take the initiative.

As for carbon sequestration? Some day it may indeed become a realistic option or perhaps even a reality. After all, most technologies we take for granted today started out as promises. But even if carbon sequestration does prove successful, will it make coal clean? What about resource limits, air pollution, water use, earthquakes, injuries, deaths, land degradation, and other negative side effects due to coal extraction and combustion practices? Keep in mind that the energy-intensive carbon sequestration process itself requires additional coal. The concept of clean coal directs our attention to just a couple of the many concerns about coal use. It is the promise itself that lends the term power. Clean coal ends up being a rhetorical cleaning more than a physical one.

The Rhetorical Cleaning of Coal

When I stand up to provide a critique of solar cells, wind turbines, and other alternative-energy technologies in front of student groups, foundation boards, philanthropists, and others, their keen senses occasionally signal that I might secretly work for the fossil-fuel industry. They are right to be wary, yet this chapter should help put those suspicions to rest. In reality, the alternative-energy project might be one of the best things ever to have fallen into the fossil-fuel industry’s lap. By drawing upon the symbolic power of alternative-energy schemes, these industries can paint their operations as clean and civically engaged. But more significantly, the promise of alternative energy entices concerned citizens to overlook extreme consumption patterns and instead frame energy problems as a lack of clean energy production. And as we have witnessed, fossil-fuel giants are all too eager to fill the outstanding order.

In the end, the question isn’t about whether or not clean coal exists. The more pressing question goes deeper: Why do so many Americans believe clean coal could exist?

The term “clean coal” was likely first uttered in a marketing boardroom. From there the coal industry’s public relations teams refined and expanded the concept, transforming it into an option. Media pundits argued for or against its existence, lending the term legitimacy. Politicians ran on the convenient platform and were obliged to direct funding to technologists who pursued it. The concept of clean coal sat upon a carefully manicured set of definitions, promises, and possibilities. It became a slick technological ideal that could morph over time to evade hostile advances. It represented different things to different people, all while maintaining a common façade by name—a façade that hid injustices that might have otherwise been exposed and addressed.

If citizens could be drawn to believe in one part of the story, then their partial beliefs could stand in for the whole concept. They could believe in clean coal. Perhaps this could only have occurred in a society that prioritized quick fixes and was primed to apply technological solutions to any given problem. No doubt, this is the path that countless successful inventions have followed. It’s just that humankind’s most meaningful undertakings are not, at their core, marketing illusions.

7. Hydropower, Hybrids, and Other Hydras

The door to novelty is always slightly ajar: many pass it by with barely a glance, some peek inside but choose not to enter, others dash in and dash out again; while a few, drawn by curiosity, boredom, rebellion, or circumstance, venture in so deep or wander around in there so long that they can never find their way back out.

-- Tom Robbins, Villa Incognito


We’ve considered several alternative-energy novelties, a few imposters, and a rogue zombie, so it might seem there is little left to discuss. But there’s more to be sure—perhaps too much to cover fully in one book. So I will take a few pages here to briefly review a selection of remaining topics before moving on. These remainders either are not publicly held up as solutions, offer only restricted geographic potential, or fall outside the core scope of this book. Still, I have chosen to briefly touch on a few that can each lend something distinctive to the greater picture. Let’s begin with an energy-production technology that has been largely forgotten.

Hydropower

As recently as 1950, hydropower fulfilled about a third of electrical demand in the United States, but growing energy consumption has eroded most of hydro’s share. Even though hydropower output has expanded since the fifties, it now serves just 5 percent of total U.S. electricity demand. Hydropower is still a monumental source of energy in other parts of the world—in Norway, for instance, dams high in the mountains quench virtually the entire electrical grid. Like wind and solar systems, dams generate electricity using a freely available and renewable resource. But unlike wind and solar systems, dams provide dispatchable supply whenever it is needed. And once built, dams provide inexpensive electrical power for a very long time.

Worldwide, hydropower provides 15 percent of electrical power. Some enthusiastic proponents claim that hydro could grow three-fold if the planet’s capacity were fully exploited.1 However, tapping into that capacity would displace many thousands of people, disrupt fishing industries, place neighborhoods at greater risk of flooding, and lead to a long list of other problems. Canadian filmmaker James Cameron joined hands with indigenous residents in Brazil to protest the construction of one such problematic dam. He compared the struggle of twenty-five thousand indigenous people who were to be displaced by the Belo Monte Dam to the struggle of the imaginary indigenous population in his blockbuster film Avatar: “There is no plan for where they go—they just get shoved out of the way. They were promised hearings by law—the hearings didn’t take place . . . and the process is not transparent to the public.” Cameron maintains that, “in fact, the public are being lied to.”

Brazil is rapidly developing and its economy is growing very fast and they are running out of power . . . so the government tells urban Brazilians, “You’re going to get power; we’re building a dam,” and so they shrug and say “Well, that’s a good idea.” Except, the power from the dam is not going to them—the dam is 1,500 miles away—the power is going to go to aluminum smelters. Aluminum smelters are incredibly energy intensive and they make very, very few jobs per megawatt so it’s really a bum deal. Plus, the profits go offshore.2


Controversy over hydropower expansion is the norm throughout the world, not the exception. Uzbekistan is alarmed by Tajikistan’s plan to build the highest dam in the world, which would take eighteen years to fill and leave little water for Uzbekistan’s cotton-growing region. Proposed dam projects are fueling disputes between Pakistan and India, a border already strung tight with nuclear tensions. In this region the 1960 Indus Waters Treaty is in danger of collapsing as India develops new forms of hydropower that were unforeseen at the treaty’s signing. In all, internationally shared rivers flow across the borders of 145 countries (the Congo, Nile, Rhine, and Niger are each shared by nine to eleven countries). Global conflict risks alone are enough to bring into question the real potential for hydropower expansion. Still, there are other concerns. For instance, the Aswan High Dam, built across the Nile, generates enough electricity to power all of Egypt, but detractors blame it for polluting irrigation networks, invasions of water hyacinth, coastal erosion, and outbreaks of schistosomiasis, a remarkably gruesome parasitic disease. Silt can also reduce a dam’s output over time and many dams are hampered by poor site planning, maintenance troubles, or design flaws. Such limitations have plagued India’s hydropower industry. Even though hydropower capacity grew 4.4 percent per year between 1991 and 2005, hydroelectric generation actually declined.3
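
The Indian case turns on the difference between capacity (what the dams could produce running flat out) and generation (what they actually produce). A minimal sketch, using hypothetical round numbers rather than India’s actual statistics, shows how generation can fall even as capacity grows at 4.4 percent per year:

# Capacity vs. generation: growth in one can mask decline in the other.
# The capacities and capacity factors below are hypothetical round numbers.
HOURS_PER_YEAR = 8760

def generation_twh(capacity_gw, capacity_factor):
    # Annual output if the fleet runs at the given average utilization.
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000

early = generation_twh(19.0, 0.45)   # e.g., 19 GW used fairly well
later = generation_twh(34.0, 0.24)   # ~4.4%/yr growth to 34 GW, poorly used
print(f"Early fleet:  {early:.0f} TWh")   # ~75 TWh
print(f"Larger fleet: {later:.0f} TWh")   # ~71 TWh, despite 79% more capacity

Silt, poor siting, and maintenance troubles all show up in such figures as a falling capacity factor.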

Given the drawbacks and limitations of dams, hydropower proponents are shifting their focus to microhydropower, wave power, and tidal power. These may make sense in many nooks of the world, but high cost and geographic constraints will likely restrict these energy sources to isolated markets.

With limited expansion potential, perhaps the best way to return dams to power is simply to waste less of the scarce energy we already obtain from them. For instance, instituting electrical pricing and efficiency strategies in the United States at levels already attained in Europe and Japan would effectively double the share of hydropower in the American grid from 5 percent to 10 percent without building a single additional dam. As we shall consider later, other strategies could elevate hydropower’s share even higher.
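
The arithmetic behind that doubling is simple: a fuel’s grid share is its output divided by total demand, so cutting demand in half doubles the share of a fixed hydro output. A sketch with illustrative numbers of my own, chosen only to match the 5 percent share above:

# A fixed hydro output claims a larger share as total demand falls.
HYDRO_TWH  = 250.0    # illustrative annual hydro output
DEMAND_TWH = 5000.0   # illustrative total demand, giving a 5 percent share

for cut in (0.00, 0.25, 0.50):   # demand reductions from pricing/efficiency
    share = HYDRO_TWH / (DEMAND_TWH * (1 - cut))
    print(f"demand cut {cut:.0%} -> hydro share {share:.1%}")
# 0% -> 5.0%, 25% -> 6.7%, 50% -> 10.0%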

Geothermal

Geothermal systems capture warmth from the earth’s crust to produce heat or generate electrical power. Our planet’s internal nuclear reactions generate a lot of heat, but it is spread thinly across the globe, save for a scattering of hot fissures. The resulting average energy density at the earth’s surface is just 0.007 watts per square foot, or about fourteen thousand times less dense than energy from the sun.4 Small-scale geothermal systems draw upon this small amount of heat during winter months via a system of underground tubes filled with liquid. Some systems can reverse the process during the hot summer months to sink heat into the ground. Geothermal systems require electricity for pumping, but since they rely on the earth to do most of the heating and cooling, they ultimately consume much less power than a comparable furnace and air-conditioner combination.
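
The density comparison is easy to verify with a unit conversion (my own check; the benchmark it implies is peak midday sunlight of roughly a kilowatt per square meter):

# Average geothermal heat flux vs. peak sunlight at the surface.
GEO_W_PER_SQFT = 0.007
SQFT_PER_SQM   = 10.764

geo_w_per_sqm   = GEO_W_PER_SQFT * SQFT_PER_SQM   # ~0.075 W per square meter
solar_w_per_sqm = geo_w_per_sqm * 14000           # ~1,055 W per square meter

print(f"Geothermal flux:         {geo_w_per_sqm:.3f} W/m^2")
print(f"Implied solar benchmark: {solar_w_per_sqm:.0f} W/m^2")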

The problem with such systems, beyond their high initial costs, is that they are only truly useful for buildings that are surrounded by large lots where the required tubing can be buried. Large lots in turn require a suburban-style infrastructure of roads, utility networks, and cars, which means that household geothermal systems are only slight improvements on what is a terrifically energy-intensive pattern of living overall. When builders cluster houses into walkable and bikeable neighborhoods or combine them into efficient multiunit urban apartment and condo buildings, geothermal systems become all but unworkable.

Larger geothermal plants hold more potential, but they only make economic sense in a few select locations where the planet’s crust is unusually hot or flows with heated springs. Large geothermal systems are ideal for communities in these locations, but hardly a solution for everyone else on the planet. If energy companies are willing to risk drilling deeper, engineered geothermal systems (EGS) can function almost anywhere on the planet. But they cause earthquakes. In fact, they require manufactured tremors to function properly. Basel, Switzerland, shuttered its pricey geothermal plant after it triggered a 3.4 magnitude earthquake in 2006. Furthermore, these operations can churn up underground radioactive compounds such as radium-226 and radium-228, producing radioactive water and steam within geothermal facilities and leaving behind solid radioactive waste.5 And even red-hot geothermal zones can degrade over time, sometimes unpredictably.

Taken together, these restrictions will limit the rise of geothermal power. Even large geothermal corporate players agree. For instance, Dita Bronicki, CEO of the geothermal company Ormat, admits, “Geothermal is never going to produce 10 percent of the world’s electricity, but the way we look at it is that if it reaches 2 percent in 20 to 50 years, then that is a lot.”6 As with hydropower, the best way to increase the share of geothermal power may be to use the resulting energy to do more.

Cold Fusion

When two light atomic nuclei fuse into one, they release a great deal of heat. Such reactions power the sun. Facilitating this reaction on a smaller scale and at a lower temperature, and capturing the resulting power, would be an impressive feat. In fact, there’s no plausible explanation for electrochemical cold fusion within the existing laws of physics, but that’s only one limitation of this energy generation scheme, and perhaps not the most problematic.

The larger hurdle facing cold-fusion researchers is economic. Cold-fusion research funding dried up following embarrassing hoax discoveries in the field, which led remaining researchers to rebrand their work as “low-energy nuclear reactions.” The small lingering stream of cold-fusion funding now trickles down to laboratories outside the mainstream. There are a couple of more plausible high-temperature fusion proposals—hydrogen bombs work on this principle—but efforts to scale them down for utilities are dreadfully far from commercialization. An international collaboration named ITER originally planned to complete a prototype reactor core in 2016, but the project is running over budget and its organizers have pushed back the deadline multiple times. Once completed, scientists plan to take twenty years to study ITER’s fusion behavior, safety requirements, material characteristics, and related issues before venturing to build a functioning power plant.7

Fusion comes with a myth too, if a somewhat tedious one: There’s no need to worry about energy consumption or its side effects since fusion will eventually deliver plentiful energy without the headaches. Fusion proponents cough up various time frames for their optimistic scenarios, which usually pool around the thirty-year mark. Unfortunately, expectations for a fusion-powered planet have been thirty years away for quite some time. First in the 1950s, then in the 1960s, the 1970s, the 1980s, the 1990s, the 2000s, and well . . . today fusion power is still about thirty years away, as its staunchest supporters will affirm with a straight face. So as a general rule, whenever anyone attempts to defend the existing energy establishment by playing the fusion card, you might choose to disregard anything that thereafter happens to fall out of their mouth.

Fusion may be an idea worth researching further, but it is most certainly not a basis for public energy policy today. Anyone who suggests that it is probably has something up their sleeve. If by some stroke of luck fusion should ever become the cheap and abundant energy source its proponents insist it will, you are invited to pull this book from your shelf and drop it in the trash—along with the rest of your nonfiction books. A world with inexpensive and scalable fusion would initiate a reset on every level of human and environmental interaction (along with some unintended consequences to be sure). But for now, please carry on.

Concentrating Solar Photovoltaic

Concentrating solar photovoltaic systems employ lenses to focus broader swathes of radiant energy onto solar cells. It’s a lot cheaper to build a plastic lens than a larger solar cell. However, concentrating solar systems only work with direct sunlight— they can’t take advantage of diffused rays on cloudy days. Also, because lenses force the solar cells to work harder and hotter, they don’t last as long. So photovoltaic replacement costs largely offset the benefits. Solar advocates argue over whether concentrating solar strategies are a bit better or a bit worse than standard solar photovoltaics. Either way, “a bit” likely won’t be enough to propel these systems into larger roles.

Solar Thermal

High-temperature solar thermal systems employ mirrors to superheat oil, salts, or steam for electrical generation. Of all the solar energy systems, solar thermal best accommodates grid demand because the hot fluids remain hot when clouds pass over or even into the night if stored in reservoirs. And since these plants ultimately run on steam-driven turbines, engineers can easily integrate fossil-fuel backup to pick up the slack on cloudy days. Generous government subsidies are fueling their growth and they arguably hold greater power generation potential than solar photovoltaics. Still, ecologists warn of the dangers massive mirror and lens arrays pose to desert ecosystems. And solar thermal power plants generally consume millions of gallons of water annually, a big problem anywhere, but one that is especially agonizing in deserts. Finally, their utility drops significantly outside hot and arid locales since they require expensive transmission lines to get their energy to market.

Other solar thermal strategies hold widespread potential, but these are frequently overlooked in the fanfare surrounding high-tech solar devices. As mentioned earlier, the most basic solar thermal strategies use dark tubing and simple optical devices to capture the sun’s rays to heat water—coiled swimming pool heaters work on this principle. These boring heaters are cost-effective and generally advantageous accompaniments to standard hot-water heaters and boilers. And solar hot water can also power thermal coolers to chill buildings during the summer.8

Heat pumps are another option to draw upon the sun’s energy. Essentially, they concentrate the tiny bit of warmth in cool air to heat buildings. Like liquid thermal coolers, they can also work in reverse during the summer. These thermal solar strategies can start to take on characteristics of efficiency strategies, which we will consider later.

Natural Gas

The natural gas industry has rather successfully reframed and marketed its product as a bridge fuel, in effect placing itself in a role that is both necessary and environmentally legitimate. But a bridge between what exactly? Between a “dirty” fossil-fuel present and a gleaming alternative-energy tomorrow? Is natural gas a bridge to nowhere?

Even though natural gas produces less CO2 when burned than other fossil fuels, its extraction operations induce corresponding harms. For instance, one of the largest reserves of natural gas in the United States lies under the Marcellus Shale watershed, which provides drinking water to more than fifteen million people. Until recently, analysts considered these gas deposits unrecoverable since they are thinly distributed in tiny pockets of impermeable rock a mile beneath the earth’s surface in a formation that extends from the northern edge of Tennessee to Syracuse, New York.9 However, a newer technique called “fracking” is changing that. Energy firms combine underground horizontal drilling with high-pressure chemical slurries to fracture rock formations, thus releasing the entombed gas. This geologically violent process leaves behind copious amounts of injection effluent and permanently alters geological formations surrounding one of the nation’s most prized fresh-water resources. Fracking also releases enough methane to more than cancel out any greenhouse-gas benefits that natural gas promoters so aggressively tout.

Many locals are peeved. An upstate New York group of investigators sifted through the Department of Environmental Conservation’s database of hazardous-substance spills from the last three decades and found that natural gas operations also bring a variety of toxic chemicals and petroleum compounds to the surface. These effluents are often, incidentally, radioactive.10 Walter Hang, president of the group, recounts one story from the investigation:

A Vietnam vet living in Candor, New York, had discovered that, even though he had lived in the same house since 1962, his water started to release gas, and he discovered that you could light it. . . . He complained to the Department of Environmental Conservation. . . . The incredibly shameful thing is that the Department of Environmental Conservation did not even come to look at this situation. They simply told this disabled vet, Mr. Mayer, “Don’t drink the water.” And that was it.11


Meanwhile, Hang himself points out that the Finger Lakes region of western New York is facing difficult choices:

These communities are just desperate for jobs. And so, it sounds so good: we’re going to get this gas out, we’re going to make tons of money, communities are going to benefit, the state of New York is going to benefit. . . . What happens when hundreds and hundreds of these hundred-thousand-pound trucks start pounding these structurally deficient bridges that have been neglected for decades into pieces? Who’s going to pay for that? What about the roadways that are going to get destroyed? What are we going to do with all of this toxic wastewater? . . . You have these upland reservoirs, hundreds of miles away from the city, and the water flows completely under gravity through these giant tunnels. It’s so pure it doesn’t even need to be filtered. And so, this is a jewel. Any city in the world would give anything to have this water. That’s why it has to be safeguarded. It has to be protected. Once it’s polluted, then the city would have to treat that water at gargantuan cost.12


As with clean coal, natural gas can only seem clean if we first sharply narrow our focus (to one CO2 output metric) and then proceed to disregard absolutely every other well-established and documented side effect, limitation, and long-term risk of deploying the fuel.

Image
Illustration 8: Flaring tap. A still from the documentary film Gasland shows tap water bursting into flames when lit, allegedly following natural-gas hydrofrac extraction operations near this home. (Still courtesy of Josh Fox)

Hybrids and Electric Cars

My first appearance on national television, twenty years ago, was followed by an unexpected realization. The CNN segment featured a hybrid car that I had designed and built. The small two-seater hybrid (a plug-in electric and natural-gas hybrid with regenerative braking) used far less fossil fuel than its contemporaries. It was also scaled for city driving, measuring just an inch longer than a golf cart. I thought it was an especially beneficial solution to our environmental challenges. I was wrong.

First of all, what counts as an alternative-energy vehicle and what doesn’t is hardly a straightforward reckoning. Such definitions are social contrivances, adeptly evolved to serve a variety of purposes. For instance, is an electric car a true alternative if its drivetrain is ultimately powered by coal, nuclear power, and lithium mines rather than petroleum?13 Perhaps, but it’s not a clear-cut calculation. According to Richard Pike, chief of the UK Royal Society of Chemistry, fully adopting electric cars would only reduce Britain’s CO2 emissions by 2 percent due to the country’s electric utility fuel mix.14 When we start to exchange one set of side effects for another, the exchange rates become confusing. This opens a space for PR firms, news pundits, environmentalists, and others to step in and define the terms of exchange to their liking.

For instance, during the hype for the launch of its Volt electric vehicle, General Motors claimed that customers could fill up for the cost of less than four cents per mile.15 The California utility company PG&E came to a similar conclusion.16 So did the DOE’s National Renewable Energy Laboratory.17 These calculations assume a retail electricity rate of ten cents per kilowatt-hour, which is roughly the national average. In academic research, government reports, environmental reports, and journalistic accounts of electric vehicles, these two persuasive figures run the show. Four cents per mile. Ten cents per kilowatt-hour. Given the apparent thrift of these two figures, it’s stunning that electric vehicles hadn’t caught on earlier. Was it a Big Oil conspiracy? Did someone kill the electric car? Or do these two honest-looking figures have something they’re not telling the jury?

In reality, when analysts use these two celebrated numbers to calculate the fuel costs of an electric vehicle they arrive rather quickly at one remarkable problem. If car buyers intend to drive their electric car farther than the extension cord from their garage extends, they won’t be able to take advantage of that cheap ten-cent electricity. They’ll have to rely on a battery—a battery they can only recharge a finite number of times before it must be replaced, at considerable expense. The battery step, not the “fuel” step, is the expensive part of driving an electric vehicle.

We don’t usually think about the storage costs of fuel in our cars because the gas tank is comparatively inexpensive. We can recharge it over and over (to varying degrees) and it generally outlives the life of the car. Not so for batteries. Batteries are many times more expensive than the electricity required to fill them. The better the battery, the more expensive it is. A sixty-watt-hour laptop battery with a seven-hundred-recharge lifespan that costs $130 to replace will ultimately cost about $3 per kilowatt-hour to operate—so expensive that the ten-cent-per-kilowatt-hour “fuel” cost to charge it becomes negligible.18 And even though electric vehicles are moving to cheaper batteries, the costs of exhuming their required minerals extend far beyond simple dollars and cents.
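
The laptop figure can be reproduced directly, and the same amortization applies to any battery. In the sketch below, the laptop inputs come from the text; the electric-car pack price, capacity, and cycle life are hypothetical round numbers of mine, included only to show the method:

# Amortized storage cost of a battery, in dollars per kilowatt-hour delivered.
def storage_cost_per_kwh(price_usd, capacity_kwh, cycle_life):
    # Total energy the battery can pass through before replacement.
    lifetime_kwh = capacity_kwh * cycle_life
    return price_usd / lifetime_kwh

laptop  = storage_cost_per_kwh(130, 0.060, 700)   # the text's example: ~$3.10/kWh
ev_pack = storage_cost_per_kwh(8000, 16, 1500)    # hypothetical pack: ~$0.33/kWh

GRID_RATE = 0.10   # the celebrated ten-cent retail electricity rate, $/kWh
print(f"Laptop battery:       ${laptop:.2f}/kWh vs. ${GRID_RATE:.2f} to charge it")
print(f"Hypothetical EV pack: ${ev_pack:.2f}/kWh on top of the 'fuel' itself")

Even under generous pack assumptions, the storage step costs several times the electricity passing through it, which is precisely what the four-cents-per-mile figure leaves out.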

Environmentalists generally stand against battery-powered devices and for good reason: batteries require mined minerals, involve manufacturing processes that leak toxins into local ecosystems, and leave behind an even worse trail of side effects upon disposal. Though when it comes to the largest mass-produced battery-powered electrical gadget ever created—the electric car—mainstream environmental groups cannot jump from their seats fast enough to applaud it.19

Even at current battery-production levels, mining activities draw fire from local environmental and human rights organizations that are on the ground to witness the worst of the atrocities. An analysis by the National Research Council concludes that the environmental damage stemming from grid-dependent hybrids and electric vehicles will be greater than that of traditional gasoline-driven cars until at least 2030, given expected technological advances.20 But even if mining companies clean up their operations (which at least will require much stricter international regulations) and engineers increase battery storage capacity (which they will, very slowly) there is still a bigger problem looming on the horizon: Alternative-fuel vehicles stand to define and spread patterns of “sustainable living” that cannot be easily sustained without cars. Cars enable people to spread out into patterns of suburban development, which induces ecological consequences beyond the side effects of the vehicle itself.

Even the most efficient hybrid or electric cars can’t resolve the larger ecological impacts of sprawl. In fact, their green badges of honor might even help them fuel it. For a time, this may not pose a problem, but eventually it will. Over time, a car’s odometer may be more environmentally telling than its fuel gauge.21 Sprawl has positive and negative effects on Americans, but its intensification is clearly at odds with the long-term ideals of the environmental movement. The future to which alternative-vehicle proponents aspire may place more people at risk from future swings in resource and energy prices. Alternative-fuel vehicles up the ante in an already heated game of energy threats dealt by what sociologist Ulrich Beck calls our “risk society.”22

In cities, alternative-fuel vehicles bring some distinct environmental benefits—less noise and smog for instance. But if these vehicles become less expensive, as their proponents claim they will, then people who once relied on cycling or walking might purchase them—potentially negating some or all of the presumed environmental benefits. Furthermore, battery-powered vehicles tend to be heavy. Drivers generally state they feel safer in a larger, heavier car, and sometimes (but not always) they are. However, heavy vehicles place pedestrians, bicyclists, and those in lighter vehicles in greater danger. Therefore, the risk of injury or death has not been reduced, but rather transferred to others.

Upon closer inspection, the benefits of green automobiles start to appear synonymous with the benefits of smoking low-tar cigarettes. Both seem healthy only when compared to something framed as being worse. And on some level, I suspect people are tuning in to this. Even with all of the hype surrounding hybrid and electric vehicles, these machines are becoming something of a cliché in some circles. In my experience, even those who most willingly embrace alternative vehicles still smell something funny in the air as they drive by—a kind of ephemeral suspicion that’s hard to pin down. Hybrid and electric vehicles may offer partial solutions within certain contexts, but those contexts are frightfully limited. Nevertheless, marketers will continue to flash their green credentials in order to sell car culture to a greater number of people worldwide.

It isn’t acceptable for doctors to promote low-tar cigarettes. Should environmentalists promote alternatively fueled automobiles? What about alternative fossil fuels, nuclear power, or alternative energy more broadly?

We might expect mainstream environmental groups, with their concerns about resource scarcity, to identify alternative-fuel vehicles as a drawn-out delay before the inevitable crunch—a crunch that would presumably be far worse if even more livelihoods were placed at risk over the intervening years. But they don’t. In fact, there is little critical engagement with the whole array of dubious low-tar energy options perched on the wall behind the cashier.

Why?

That’s the intriguing question to which we shall now turn.