
The Future of Technological Civilization

2013 Revised Edition

Edward Woodhouse


Section I. Challenges Facing Technological Civilization

Section I. Challenges Facing Technological Civilization
Chapter 1. Introduction: Progress and Its Problems
Chapter 2. First Challenge: Unintended Consequences
Chapter 3. Second Challenge: Fairness
Chapter 4. Third Challenge: Innovation Too Slow
Chapter 5. Fourth Challenge: Innovation Too Rapid

Section II. Innovations for Improved Technological Steering
Chapter 6. Plunging Ahead versus Intelligent Trial and Error
Chapter 7. Inducing Business Executives to Better Serve Citizens
Chapter 8. Strategies for Steering Business II
Chapter 9. Political Innovation: The Potential Intelligence of Democracy
Chapter 10. Real Technological Democracy?

Section III. Technical Professionals' Public Responsibilities
Chapter 11. Engineers and Overconsumption by the Affluent
Chapter 12. Nanoscience and the Privileged Position of Science
Chapter 13. How Technoscientists Can Promote Fairness

Section IV. Envisioning a Commendable Future
Chapter 14. No Innovation without Representation?: Human Enhancement
Chapter 15. Thinking Carefully about Military Innovation
Chapter 16. Technology, Work, Leisure, and a Satisfying Life
Chapter 17. Conclusion: Envisioning a Wiser, Fairer Technofuture


This text is dedicated in part to the several thousand (mostly) hard-working students who have taken Science, Technology, and Society over the past several decades. I especially appreciated the contributions from the dozen or so students each semester who were brave enough to ask good questions or make insightful comments, helping to turn a large, overheated lecture hall into a somewhat friendlier, more interactive environment that was at least intermittently conducive to genuine inquiry. A few students each year actually had more knowledge than I did about a particular topic, and the information you shared enriched both my own understanding and that of your fellow class members. And from time to time, a few of you told me I was doing a good job of provoking you to think. Of course there always were a number of students who sat passively as I lectured -- perhaps thoughtfully taking it all in, perhaps functioning as good students waiting to be spoon-fed the answers for the test, or perhaps daydreaming about sex, drugs, and rock and roll. There also usually was a heckler who argued repeatedly with me, typically without much visible evolution of his thinking (or mine). Both the silent majority and the disputatious loners, along with the few students who occasionally fell asleep during a typical semester, ended up making more of a contribution than I initially would have realized: I had to try harder to keep your attention -- by using more audiovisual aids, by telling better stories, and by striving to reach you in other ways. Naturally I found some of those actions and reactions more pleasant and helpful than others. But I'm truly grateful to all who shared a portion of your mental and other energies with me and with the other instructors in Science, Technology, and Society.
I know that courses at a technological university can be grueling, especially when one is ill, worried about a job interview, having problems with a friend or family member, or simply overwhelmed by being 18-21 years old.

Oddly enough, the Registrar-prescribed, 50-minute lecture format also helped by forcing me to distill and organize my thoughts, to decide what could be left out and what was too important to skimp on. Having prepared the lectures, I found it easier to write this text. I hope that having the ideas available to you in written form will relieve you from having to take as many notes. Moreover, knowing that you have the basics in the text, I can feel less pressure to jam everything into the lecture, allowing more time for back-and-forth discussion, videos, consideration of relevant current events, and other learning exercises that many students find more helpful and more enjoyable than straight lecture. I will be revising the text at least once more before its formal publication, and I will appreciate whatever feedback you care to give to help improve the book. Some of the ideas no doubt could be presented more clearly, but I may not realize where the deficiencies are unless you tell me. Or you may know of a better example than one I have used. Or you may catch a factual error. You may find some chapters simply uninteresting, and may nudge me to drop or rewrite them. And there probably are a few places where my own beliefs come across too strongly, and I should rewrite the passages to make sure that I am conveying knowledge more than giving opinion (although the two activities can never be entirely separated). In closing, I also want to thank the thirty or more graduate teaching assistants who have helped craft assignments, suggest useful readings, convey students' reactions, and point out concepts and other course materials needing clarification. Your relatively youthful enthusiasm for teaching has kept me on my toes, helped me become a better teacher, and given me renewed inspiration at times when I began to think that teaching anybody anything is virtually impossible. This book is dedicated to you, too.
Edward Woodhouse
STS Department, Rensselaer Polytechnic Institute
January 2013

Chapter 1. Introduction


Breakthroughs in scientific understanding, near-miraculous technological developments, and unprecedented increases in wealth have characterized your lifetime and mine. Much of what is now taken for granted would have seemed amazing or even magical to our not-so-distant ancestors. A century ago, electric lights and telephones were coming into widespread use, Ford was beginning to mass produce automobiles, and the machine gun was ready for slaughter in the trenches of World War I. By the late 1940s, an era that now seems a bit quaint, hundreds of millions worldwide already were benefiting from technologically mediated communication, transport, manufacturing, medicine, and cuisine that exceeded what kings and nobles of earlier eras could have hoped for. Some of the innovations since then are mundane and some are life transforming: Snowboarding on machine-made snow, Facebook and streaming video, treatments for childhood leukemia and erectile dysfunction, air conditioning everywhere, copious quantities and flavors of ice cream, ... the list is almost endless. People in China, India, Indonesia, Brazil, and other parts of the planet are attempting to catch up with those in Europe, the U.S., and Japan, with approximately one billion entering the global middle class during this generation. Nor does the pace or variety of innovation seem to be slowing, with epochal changes on the horizon via robotics, human genetic enhancement, 3D printing, synthetic biology, nanotechnology, and more.

These trajectories may be disrupted by bioterrorism, by climate change from greenhouse gases, by meltdown of the global financial system, by wars bred of ethnic or religious hatred or competition over scarce resources. Meanwhile, however, the scientific reports are fascinating, the comforts highly enjoyable, and the biomedicine sometimes life-saving. The best spirit of global understanding can feel incredibly uplifting as people in diverse cultures discover that other humans are not so different from themselves. I hope and believe that thoughtful, determined, and cooperative people can figure out how to continue to innovate both technologically and socially. This would require building on the best of the past, correcting the worst errors, and heading off future technologically enabled catastrophes. It might also mean sharing the benefits of technoscience more fairly with more of humanity. Tempering my optimism is the fact that daunting challenges lie ahead for your generation. It is sobering to realize, for example, that about as many people now live in poverty as the total number of humans who were alive in 1900. Millions of children grow up with stunted brains from lack of protein at crucial developmental stages; others become blind from lack of a dollar's worth of vitamin A. And perhaps a billion adults suffer from preventable or curable illnesses, including diabetes deriving partly from the epidemic of obesity. Psychological illnesses including depression and bipolar disorder have escalated in the past two generations, especially among those who are relatively affluent. Almost every college student knows another teenager or family member who suffers from emotional-chemical disorders, addictions, or other psychological challenges. 
Alzheimer's and other forms of dementia no doubt are being diagnosed more accurately these days, but the incidence is shockingly high and greatly diminishes the quality of life not only for the immediate victims but also for family and friends. Some knowledgeable geologists, urban planners, and other relevant observers project that shortages of water could be one of the most serious problems of this century. Drought has recurrently affected regions as disparate as the parched Sahel in Africa and the American West, where a ten-year shortfall in rainfall has induced Las Vegas municipal officials to drill access lines deeper into a shrinking Lake Mead behind Hoover Dam. Altogether too many people now live on lands considered arid (or genuine desert), or where the brackish local water is too saline to be drinkable. Anyone who has lived through a drought can appreciate that mild deprivations such as restrictions on watering lawns, washing cars, or hooking new houses up to city water supplies could easily turn nasty if water shortages worsen. Climatologists' projections may prove incorrect, but the overwhelming majority now expect substantial climate change from growing concentrations of carbon dioxide, methane, and other greenhouse gases in the upper atmosphere. Five of the ten warmest years on record worldwide have occurred in the past decade, with 2012 tentatively estimated as the warmest ever in the United States. Thus far, this is no more than a minor inconvenience in temperate climates for those who can afford the electricity for air conditioning. But most types of electricity generation add more carbon to the atmosphere, making the problem worse in the long run. Several billion people live in warm climates within 30 degrees of the equator, and electricity supply already tends to be both unreliable and too costly for quite a few. A shift of five degrees in mean temperature could bring catastrophic changes in farming, in whether those with vulnerable health can survive, and even in whether schools can function during the hottest seasons of the year. "Anthropogenic" (human-originated) releases of greenhouse gases are certain to increase as affluence spreads, with 1200 new coal-fired electric power plants scheduled for construction.
In addition to melting in the Arctic and in Antarctica, effects are expected to include rising ocean temperatures endangering already precarious coral reefs and sea life populations, wind currents shifting the areas affected by tropical storms, and rising waters flooding coastal lands. With roughly half of humanity living within a hundred miles of the ocean, a worst-case scenario is almost impossible to contemplate. To mitigate or head off the worst, technoscientists have for more than thirty years been proposing various "earth engineering" schemes. The earliest was to use jumbo jets to deposit a million tons of sulfur dioxide in the stratosphere to reflect incoming solar radiation. Observers capable of arithmetic pointed out that the world's entire fleet of jumbo jets would not be enough to do the job, and others believed that SO2 would block heat from the earth's surface radiating back into space, thereby negating whatever good effect the plan might otherwise achieve. Then came the idea of dumping millions of tons of iron filings into the oceans. Iron is a crucial nutrient for plankton, the tiny plants consumed by whales, and the excess iron could stimulate enormous plankton "blooms" to remove and sequester carbon from the atmosphere as terrestrial green plants do. Oceanographers and ecologists reacted with horror to the possible secondary and tertiary effects on complex food chains, but U.S. law does not cover actions by non-U.S.-flagged vessels on the high seas, and the international Law of the Sea does not cover iron, a natural substance. Other earth engineering schemes are now afoot, including using deep wells to sequester carbon. Biotechnologists are attempting to genetically modify algae and enzymes to convert CO2 to ordinary carbon (which is easily stored), and even fancier efforts are underway in hopes of making it feasible to keep burning fossil fuels without changing the atmosphere. Still others speak seriously of using robotic and biological machinery to terraform Mars, making it into a world that could support life as a backup plan in case humans ruin the Earth. Most with relevant scientific expertise dismiss the idea, but who knows for sure?
Altogether, then, your generation faces a mishmash of fascinating, helpful, and otherwise splendid technoscientifically enabled actualities and potentials. But these intertwine with far darker prospects. If none of the worst scenarios comes to pass, it will be due either to great good luck or to a Great Transition in which technoscientists, government officials, business executives, journalists, social thinkers, and ordinary citizens begin to evolve the capacities required to operate an intelligent planetary civilization. No one has a definitive list of what would be required for this, but I hope that one component of it will be a deeper concern about social justice. I do not expect those advantaged by the present system to suddenly start sharing everything equally with everyone else. But some of the most affluent -- Bill Gates, George Soros, Warren Buffett -- already are donating tens of billions of dollars, trying to help the poorest humans raise themselves to a decent standard of living. It may be improbable, but I think there is at least a chance that such leadership by example, together with humans' innate capacity for empathy, may gradually evolve into an ongoing global conversation regarding what constitutes a fair sharing of the benefits of technoscience. Likewise already underway, albeit moving slowly, are inquiries into what it would take to detoxify the planet and otherwise innovate environmentally to protect fragile ecosystems and safeguard endangered species. Not many probably care much about the several thousand species of insects and other lower life forms that ecologists estimate to be going extinct each year. But a great many people value whales, lions, elephants, mountain gorillas, and other "charismatic megafauna"; and protecting these splendid creatures requires figuring out what can be done to prevent a repetition in your era of the damages inflicted by 20th-century wars, pesticides, roads, construction, mineral extraction, agribusiness, and other habitat-disrupting activities.
There is a slowly developing recognition that the world is a complicated, interdependent ecosystem; whether those in authority will act expeditiously on that insight is doubtful, but failure is not a foregone conclusion. Another angle on big problems and possibilities can be put very negatively: How could so much technoscientific progress have failed to produce a happier world? Or, to put it positively: starting from here, how can technoscientific successes be better translated into people's lived experiences? Among other challenges, this would involve reducing military violence while improving negotiated problem solving; reducing physical and emotional abuse of women and children; reducing psychological depression while increasing genuine happiness; and building communities that don't merely have convenient malls but actually feel good to live in. One of my greatest concerns is that a lot of young people are just giving up on such ideals, figuring that the world is a lost cause and it's hopeless to do anything more than look after oneself and one's loved ones. It seems to me a lot more satisfying to ask: How can one pursue a meaningful, interesting life while assisting others to do the same? These and other big issues might be framed in many different ways, but I find it helpful to focus on a handful of simple questions such as these:

1. What positive goals for innovation?
2. How to reduce unintended consequences from scientific inquiry and technological innovation?
3. Who should get how much of what?
4. How rapidly to pursue which innovations?
5. Who should decide?
6. What institutions ought to be improved or created to enable wiser, fairer steering of technoscience and its uses?

At a higher level of generality, this book asks: What would be required to guide science and technology toward better fulfilling more humans' needs more of the time? Every chapter adds a piece to the story. Unlike most textbooks, this one is deliberately partisan -- meaning that I do not try to be "neutral." I have ideas to advocate. Some readers will perceive that as biased, but wouldn't it be silly or even irresponsible to pretend neutrality while analyzing current predicaments and future possibilities of a technological civilization with upwards of seven billion lives at stake?

Given that the status quo is biased toward the goals of the affluent and powerful, moreover, how would it be possible to have an even-handed discussion unless someone plays devil's advocate by presenting a vision of how things might be different? Of course there is an obligation to get facts straight and to use good logic; but the future of technological civilization is not a subject conducive to handling via equations of the sort that engineering and science students are used to seeing. The facts in the book are difficult to contest, but the interpretations certainly are open to dispute. In fact, you might say that is their purpose -- to open minds for inquiry, debate, and growth by providing thought-provoking perspectives. But your assessments of what is going well or poorly, and why, may differ from mine, and you do not automatically have to believe (or even pretend to believe) everything I say just because this is a textbook. I also inquire into ways of building upon what is wondrous about science and helpful about technology. If enough people wanted to steer technological civilization more wisely, what innovations might be worth considering in the social institutions that steer technoscience -- business, consumer, government, university? It is awfully difficult to know, because the status quo is so influential in shaping actual behavior and even in shaping habits of thought. Only by considering big changes is one likely to be able to see the status quo clearly, and thus be able to stop merely assuming and start really thinking about what is worth keeping and what requires improvement. So I aim to be provocative, in order to induce you to take a fresh look at how the technoscientific and technosocial worlds now operate. I readily admit that some of my ideas go against the grain of widely believed and deeply held notions regarding "progress," and it is appropriate for you to hold onto your own ideas unless you genuinely become convinced to modify your thinking.
Disagreeing at times, looking at angles that another has missed or underemphasized, and finding contradictions are all facets of actually thinking, and for college students that activity may be more important than the particular words between the covers of a book. With that caveat, let me now briefly preview the remainder of the book.

The inquiry starts in chapter 2 with perhaps the most basic challenge that any innovation or other action can encounter: unintended consequences. Even simple choices by a single person can have surprising and significant outcomes -- one in three college students becomes dissatisfied and changes either the college they have chosen, or their major, or both. Think of the divorce rate. Think of the fraction of workers who stop feeling challenged by their jobs and become unhappy in their careers. None of these people deliberately sought such outcomes, but they got them anyway. If individual choice is fraught with risks of unwanted-unintended consequences, how much more likely are surprises when developing and introducing a complex technological innovation? No one person controls it, so no matter what "the inventor" intends (if there really is such a person, which is rare these days), once an innovation is released into the world other people modify, add, and sometimes grossly distort whatever the original goals may have been. R. J. Gatling, inventor of the Gatling gun that morphed into the machine gun, explained how during the U.S. Civil War "I witnessed almost daily the departure of troops to the front and the return of the wounded, sick, and dead. Most of the latter lost their lives, not in battle, but by sickness and exposure incident to the service. It occurred to me if I could invent a machine -- a gun -- which could by its rapidity of fire, enable one man to do as much battle duty as a hundred, that it would, to a great extent, supersede the necessity of large armies, and consequently, exposure to battle and disease be greatly diminished."

Gatling was sincere, just incredibly naive. Prior to his invention, there were about as many combat deaths suffered by the losing side in the entire four years of the U.S. Civil War (72,500) as British casualties on the first day of the Battle of the Somme in World War I. Altogether, Gatling and others helped make the 20th century the bloodiest in history, with north of 100 million people dying directly or indirectly in violent conflicts. It could be ten times that number in this century if Pakistan and India get into a nuclear exchange. Or if a terrorist with the right training utilizes a recent article (in the prestigious scientific journal Nature) explaining how supposedly responsible biological scientists deliberately re-created the Spanish Flu virus that killed 50 million people in the worldwide pandemic of 1918-1920. These are among the dramatic threats, but there are other emerging technoscientific potentials that conceivably could be used to change life beyond all recognition. That could be good, of course; or it could be terrible, depending on which intended and unintended outcomes emerge from the large set of possibilities. Biomedical and genetic technologies for human enhancement enabling high-quality living well past 100 years old? Sounds good unless the old-timers hang onto the good jobs, money, and power, thereby keeping young people from taking their rightful places. Never-before-seen levels of unemployment from robotics and other automation? Terrible new weapons bordering on those featured in "Star Wars"? Synthetic biology and nanotechnologies leading to ... who knows what? While hoping for the best, members of an intelligent, caring civilization would take very seriously the risks of innovating in ways that could end up toward the negative end of the spectrum. A first task for better steering of technological innovation, therefore, is to figure out how to cut down on the frequency and severity of unwanted-unintended consequences.
Chapter 3 introduces a second big challenge, that of fairness: Who deserves what kinds of access to which technological benefits? Illustrating the potency of the question are two simple technologies taken for granted by everyone you and I know: clean water and safe sanitation. Yet even in 2013, for several hundred million people in the world's poorest cities, water and sanitation are extraordinary daily challenges. The solutions are so elementary in terms of the relevant science and technology that it may be questionable whether to include the topic in a book like this one. However, a great virtue of thinking about water and sanitation is that it brings front and center the social aspects of technoscience. All the scientific understanding and engineering expertise in the world does not make for a better life until technology is translated into a form that is usable for those who need it. No topic puts this more in one's face than the elementary metabolic and excretory needs: Water-borne diseases take as much of a health toll as heart attacks; the unceasing toil of collecting water puts a terrible physical burden on the very people who have the least caloric energy to spare; and the discomfort, disease, and embarrassment accompanying lack of access to safe sanitation go to the heart of what it means to be a civilized human. There are myriad other facets of the issue of (un)fairness, some of which are discussed later in the book. Chapter 4 examines a third challenge: potential technological innovations that are much needed, but that are delayed unduly. That may seem impossible, given the myriad technical potentials coupled with the diversity of business people seeking to make a buck. To put the point more positively, large numbers of highly trained and well-funded scientists, engineers, biomedical researchers, IT innovators, and others are pursuing exciting scientific research and technological design. On first inspection, therefore, it seems unlikely that many innovations could be missed for very long. It turns out that the situation is more complex.
Consumers of course routinely get pomegranate-flavored lip gloss, styling updates on clothing, and a few thousand other new products each year; computing and consumer electronics march ahead, or at least march somewhere, with several hundred thousand new "apps" emerging in the past few years. But certain types of important niches fail to be filled year after year, decade after decade.


Chapter 4 explains why economic innovators sometimes fail to act as one plausibly could expect, and why neither governments nor technologists make up for the omissions of businesses in the transport, energy, chemical, and other sectors. Chapter 5 analyzes a fourth grand challenge: controlling innovations that proceed too rapidly. Partly because of the likelihood and potential severity of unintended consequences, overly rapid change is a prescription for trouble. What incentives do innovators now have? Is it primarily a strong impetus to use the accelerator, with little knowledge or motivation to find and use the brakes? For many readers, this will be the first occasion for thinking about the possibility that synthetic biology, life extension, 3D printing, robotics, or other fascinating technologies may be moving at a pace incompatible with humans' limited cognitive and organizational capacities for learning and adaptation. I suspect that some of you may find uncongenial the idea of deliberately slowing an excessive pace. And many will doubt that it can be done, given all the momentum. Overall, then, the first section of the book poses a set of four challenges facing those who would steer technoscience and its utilization more wisely and fairly. Accelerating innovations that are proceeding too slowly, slowing innovations that are proceeding too rapidly, fairly distributing the benefits of technoscience, and guarding against severe unintended consequences are daunting tasks. Succeeding at these challenges would require social innovations to perform the steering, as discussed in Section II. Chapter 6 offers the most comprehensive "answer" found anywhere in the text regarding how to bring unintended consequences, the pace of change, and other challenges under more intelligent control. Intelligent Trial and Error is a system aimed at learning to cope with uncertainty and disagreement, two of the most central, bedeviling facts of life.
When technologically mediated problems arise, it often is because the world lacks deliberate procedures for thinking through young technologies before they are unleashed on the world. Any system of governance worthy of the name would have to adopt protections something like those prescribed in Intelligent Trial and Error. Chapters 7 and 8 discuss the main mechanism now used for making decisions about what new technologies to introduce, how many of each item to manufacture, and what the health, environmental, and social side effects will be. That mechanism, of course, is the business corporation together with those who purchase goods and services. Corporate executives enjoy extraordinary discretion in deciding what to bring to market, and what not to bring to market. Cost, labor contracts, government regulations, consumer expectations, and other constraints mean that business does not have a free hand; but executives enjoy tremendous latitude and authority. If outsiders cannot win a greater measure of influence and begin channeling business executives' behaviors, then technosocial outcomes are pretty much bound to remain a mixture of the marvelously beneficial, the silly, and the outright destructive. These chapters offer simple proposals for incentivizing high-level business executives to innovate in ways that take public needs into much better account. I seek to preserve the profit motive and most of the other liberties of market-oriented economies; but I am confident this can be done without letting businesses and purchasers run amok. I do not like the dangers they unnecessarily create -- most recently the Great Recession, born of new financial technologies too complex for anyone to understand, coupled with shoddy lending and investment practices at banks and stock brokerages. And I do not like the number of important innovations that businesses and consumers fail to pursue. I outline in these chapters a set of economic reforms that could induce business executives voluntarily to serve public purposes, because they could make a good profit from the change in focus.
The cures are actually not hard to figure out; the main obstacle lies in the legacy thinking that leaves many minds unable or unwilling to imagine and enact simple innovations in economic life.


Chapters 9 and 10 make an equivalent foray into politics and government. I realize that many people are skeptical about "politicians," and with good reason. But giving up on government would be like giving up on families because some parents are abusive or incompetent, or because many spouses have not learned to live together lovingly. A better stance is to work on improving marital and parental competence, because giving up on the family system as a whole probably is not an option. The same goes for government: Without effective government tax incentives, protections for intellectual property, subsidies to universities and students, and legal regulations that hold businesses accountable, technological innovation could not proceed as well as it does. Proceeding better will require improvement in electoral politics and government. More concretely, though, what would "better" government mean? And what might one seek from government to improve the intelligence of technological steering? I will not give away the whole story here, but one of the ideas is to re-introduce a practice first employed in Venice 500 years ago: Operate elections in multiple stages, two of which involve random selection among a reasonably large number of viable candidates. This makes it far more difficult for moneyed interests to exercise disproportionate influence -- they would not know whose campaign to fund -- thereby making it easier for ordinary people who are more representative of the citizenry to gain political office. The next set of chapters considers engineering ethics and technical professionals' social responsibilities more generally. Barriers to fulfilling legitimate responsibilities appear to be high.
Chapter 11, "Engineers and Overconsumption by the Affluent," analyzes trends among the billion persons with the most spending power: trends toward bigger houses, filled with more stuff that has traveled over longer distances, at greater energy cost and with more environmental damage, while producing envy among those who have less. Most of this would have been impossible without engineers' contributions, but does that mean that engineers have any individual or
collective responsibility for the outcomes? Or are they merely employees who must do as they are instructed? If there is anything engineers might or should be doing differently, what are the implications for engineering education and for professional associations of engineers?

Chapter 12 examines nanoscience and nanotechnology. Most of the book is about stuff in the world and the engineers who help put it there -- about the tangible machines and electronics and chemicals and other products that consumers actually touch. This chapter, however, puts the focus more on the inquirers, the scientists. Those who wear the label "scientist" often get off rather lightly for their shortcomings, for the dangerous capacities they help bring into being, and for their failures to help give birth to other needed changes such as chemical greening. Valorizing science may seem to make good sense; after all, who could be against more knowledge? It is not the knowledge that causes problems, it's what is done with it, right? Well... there is some plausibility to the notion, but not enough, as will become apparent. University faculty -- professors of chemistry and of materials science working on nanoscience, for example -- are crucial links in the chain of capacities that engineers and industries utilize to make both wonderful and terrible new products and services.

Chapter 13 asks, "How Can Technoscientists Do More to Promote Fairness?" One usually thinks of fairness as a quality of individuals: You are a fair-minded person, whereas she is not. But technoscientists' roles may induce some to produce unfair outcomes even if their personal dispositions would be in sympathy with have-nots. If your job is to design yachts, for an extreme example, at least on the job you are not going to be doing much to help the homeless. Except maybe a rich beach bum.
And if your job as a university researcher is to obtain grants and publish articles, maybe you don't really have much latitude to do "needed" research if that is not where grant funding is available.

Section IV, "Envisioning a Commendable Future," brings the book to a
conclusion. The cruel fact is that no one now alive fully understands how to steer a technological civilization as wisely, fairly, and gently as vulnerable humans and the planet under their care deserve. But all the questions posed in chapter 1 will have been answered by the end of this section, though some of the answers are more creative and appealing than others. I hope you will find the conclusions at least plausible, and perhaps some of the ideas will stimulate you to fresh visions of a technological civilization much better than any glimpsed thus far.

Chapter 14 uses the case of human enhancement techniques to ask whether it would be desirable to adopt an ethic of "No Innovation Without Representation." If new scientific understandings and technical capacities are a force for changing the conditions of human life, which humans deserve to help make the choices? Is there adequate justification for not letting everyone be represented before authorizing an attempt by some to turn themselves and a few others into superhumans?

Chapter 15, on military innovation, questions the logic now used to justify high levels of weaponry innovation as a crucial component in defense against attack from potential enemies. I first try to bring to the surface the thinking that now resides somewhere in the recesses of many people's minds, and I then present reasons to doubt each of the six elements in the conventional train of logic. If you accept my reasoning, you will hereafter be less confident of the wisdom of massive military R&D. Rather than being an essential or even viable means to important ends, does continuous weaponry innovation begin to look more like a habitual response based on mislearning key lessons of history? Whatever readers end up believing about the issue, the chapter may raise concerns about the pace and direction of innovation oriented toward violence higher on your mental landscape.
Given the trillions now expended annually on weaponry and organized militaries, even a partial inflection of the trajectory could free up huge sums of money as well as the energies of armaments experts and ordinary soldiers. If fewer people had to worry about becoming victims
of violence, my guess is that technological civilization would be a more hopeful place to live.

Chapter 16 asks, "What Kind of Life Brings Satisfaction?" The primary emphasis is on work versus leisure, but the larger issue concerns how technoscientific capacities ought to be directed so as to assist more humans in living more satisfying lives. Without positive visions to promote this end, all the analysis in this book cannot amount to much. Even if every problem pointed out in the book is valid; even if every proposed solution would be helpful; and even if everyone agreed about all that, technosocial life still could remain impoverished if there are not ongoing conversations and gradually improving ideas about the kinds of lives that technoscientific capacities ought to be used to help promote. What kind of life would you want for yourself and for everyone else?

Chapter 17 summarizes the book's main points, while offering a modest challenge to young scientists and engineers. If science, technology, and society intertwine inextricably in a sociotechnical system, doesn't it become apparent that technoscientific innovation without political, economic, and cultural innovation is probably going to lead to a mess? That technoscientific changes induce social changes, and vice versa, no one can reasonably doubt. Why, then, would anyone resist the idea that all parts of the sociotechnical system need to be updated periodically and that R&D on social innovation may be every bit as important as technoscientific R&D? To hope otherwise -- to hope that technoscientific change somehow will magically allow people to avoid dealing with the messiness and conflict inherent in political and economic change -- is wishful thinking; and continuing the imbalance between technoscientific and social innovation is a sure way to undermine the best possibilities for humanity's future.
The question for me is whether highly intelligent and highly advantaged technoscientists will join in challenging the present imbalance, and in advocating for political and economic reforms that could make the best of the technosocial possibilities while avoiding the worst.


Chapter 2. First Challenge: Unintended Consequences

Everyone knows that actions often have effects that were not anticipated. Considerable uncertainty is unavoidable in life, for it rarely is possible to fully predict what will eventuate from new behaviors such as going to college, having a family, or innovating technologically. Uncertainty and surprise often are enjoyable, adding spice to life: Who wants to know the ending of the murder mystery before completing the book or viewing the film? Who besides young children enjoys playing tic-tac-toe with the same outcome over and over? Who does not enjoy opening gifts? So there is nothing inherently wrong with uncertainty and surprise.

However, not all surprises are created equal. Some surprises are very positive, some are very negative, and many are a mixture of welcome and unwelcome elements. Unintended outcomes of course vary greatly in severity and in the number of people affected. If I get a minor case of food poisoning from eating undercooked chicken at a restaurant, it is unpleasant for me but of little consequence to others. In sharp contrast, the failure to design nuclear reactors to withstand tsunamis turned out to be devastating for several million people around Fukushima, Japan, in 2011.


Reducing the frequency and severity of unintended consequences therefore would be a high priority for anyone interested in systematically improving the ratio of benefits to harms from technological innovation and use.

Beginning to Think About Unintended Consequences

The more complex the undertaking, the greater the likelihood of surprise, because more factors can interact in unforeseen ways. But even ordinary behaviors such as walking into a convenience store while a robbery is in progress can sometimes bring terrible consequences. And very complex endeavors such as the first Moon landing occasionally work out pretty much as planned. So there is a strong correlation between complexity and uncertainty/surprise, but it is not a 1:1 relationship.

The idea of things unexpectedly going wrong is part of everyday culture, as in the clichéd notion of Murphy's Law: Whatever can go wrong will go wrong. During the Second World War, unknown members of the U.S. Army made up the word "snafu" to describe life-as-usual in the military. The word is an acronym in which each letter stands for one word in a longer phrase: Snafu = situation normal, all fucked up. Everyone knows that soldiers are sent on dangerous missions, but few may realize how frequently negative outcomes occur unintentionally. Troops in Vietnam actually had to ask their families to send cleaning supplies to unjam heavy rifles designed for long-distance sharpshooting instead of for jungle firefights in high heat and humidity. Landmines in Iraq killed and maimed soldiers who would have been better off if military procurement officers had ordered Hummers, Jeeps, and other vehicles with steel-reinforced, V-shaped under-armor designed to dissipate and protect against explosive effects.

There is such widespread familiarity with the notion of snafu or Murphy's Law that everyone ought to be attuned to the likelihood of endeavors going awry. Yet many facets of sociotechnical life proceed as if those in authority
expect everything to work out fine. Time after time, they appear not to anticipate unanticipated consequences -- for they often fail to arrange in advance to mitigate the likelihood, severity, or number of undesirable consequences. U.S. military officials still are not giving soldiers returning from combat sufficient psychological support to head off long-term posttraumatic stress disorder -- despite the fact that what once was known as "shell shock" has been recognized for a century and understood for decades.

Automobile design offers a wealth of examples of failure to protect against unintended consequences. People run out of fuel and freeze to death; or, driving through Death Valley in the summer, they die of heat exhaustion. Anyone with half a brain knows that a certain fraction of drivers are going to run out of fuel. Yes, there are warning lights in most contemporary vehicles. But what about that fraction of drivers who find a gas station unexpectedly closed, or farther away than anticipated? Or those who are distracted by the death of a loved one, the breakup of a relationship, health problems, or screaming children in the back seat? Those realities could impair any driver's attention to fuel supply. Might one expect that automotive engineers and manufacturers would anticipate on a statistical basis what some fraction of individual drivers will not foresee?

Fuel supply is only one of the reasons an internal combustion engine may stop operating, and of course it is impossible to prepare for every automotive contingency. But for engine problems of all kinds there is a dead-simple fix: a back-up engine with a different power source. When the main engine will not run, a separate engine can let the driver limp to a repair station or at least to a safe place to phone for help. That may initially seem an extreme measure, but what if the same fix had other benefits? Oh, yes, we could call it a hybrid vehicle!
That is not the advertised function of hybrid cars, of course, and many are (foolishly?) not set up for fully independent electric operation. But there are
millions of calls per year to the American Automobile Association and other roadside services worldwide, and a nontrivial fraction could be avoided via the dual-engine solution. If just 20 percent of automotive emergencies were addressable by the hybrid approach, five to ten million drivers and passengers per year could reduce or avoid the inconvenience, expense, suffering, and medical problems -- including death -- that their breakdowns now occasion. Available statistics are crude and subject to interpretation, but counting deaths at $10 million each, roadside assistance and inconvenience at $200 per incident, and intermediate cases at $10,000 per injury, that might add up to roughly $500 billion that could have been saved over the course of the past century (hybrids became available in 1898). Giving priority to unintended consequences and designing with them in mind thus could sometimes or often lead to rethinking and improved design.
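For readers who want to see how such a figure might arise, the back-of-envelope arithmetic can be sketched as follows. Only the per-case dollar values and the five-to-ten-million annual range come from the paragraph above; the split of avoidable incidents among minor, injury, and fatal cases is an illustrative assumption introduced here, not a figure from the text.

```python
# Rough, illustrative estimate of century-long costs from engine-failure
# breakdowns that a dual-engine (hybrid) design might have averted.
# Dollar values per case come from the text; the severity shares are
# hypothetical assumptions for illustration only.

YEARS = 100                      # hybrids date to roughly the turn of the century
AVOIDABLE_PER_YEAR = 7_500_000   # midpoint of "five to ten million" per year

# Assumed (hypothetical) shares of avoidable incidents by severity.
shares = {"inconvenience": 0.99495, "injury": 0.005, "death": 0.00005}
costs = {"inconvenience": 200, "injury": 10_000, "death": 10_000_000}

# Century total = years * annual incidents * (share * cost) summed over severities.
total = sum(
    YEARS * AVOIDABLE_PER_YEAR * shares[kind] * costs[kind]
    for kind in shares
)
print(f"Estimated century total: ${total / 1e9:.0f} billion")
```

With these assumed shares the total lands in the neighborhood of the text's "roughly $500 billion"; shifting the assumed fatality share even slightly moves the total substantially, which is exactly why such estimates are crude and subject to interpretation.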

The Law of Unintended Consequences

Guarding better against a wider array of unwanted consequences that are readily imaginable obviously would be an important component of improved technological steering. However, a more intellectually challenging task is that of preparing to cope better with consequences that truly cannot be anticipated satisfactorily. Chapter 6 deals with that issue in some detail, and the emphasis in this chapter is mainly on unintended consequences per se rather than on what should and could be done about them.

Economists are among the best at attending to what they sometimes refer to as "the law of unintended consequences." At least as far back as Adam Smith's 1776 landmark book, The Wealth of Nations, it has been apparent that surprisingly good outcomes can emerge unintentionally in market economies. For example, suppose that demand for airline seats is down, and airline executives declare a fare sale. College students who had been preparing for a boring week at home over Spring Break instead find
themselves able to afford a beach vacation; some have a wild time, meet the woman or man of their dreams, and forever remember the experience. Meanwhile, jobs are created for airline staff, rental car agents, restaurant employees, hotel staff, and others who serve vacationers. Travelers have little awareness of these side effects, which are byproducts of buyers and sellers engaging in market exchange.

More complex outcomes emerge in a similar way. As costs increase for tracking and disposing of hazardous chemical wastes, business executives become interested in finding ways to reduce expenses. "Green" entrepreneurs then find it easier to sell their less toxic alternatives; venture capitalists perceive the shift in business trends and provide funds for new green companies; and newly graduating chemists, chemical engineers, and MBAs find employment opportunities that previously did not exist. Other business executives see that they are being left behind, and begin seeking a share of the green market. Consumers, in turn, see more products touting environmental purity (often exaggerated by the marketers), and gradually become attuned. Over time, in the happiest scenario, there is an automatic shift toward less- or nontoxic chemical products.

Adam Smith referred to these positive byproducts of actions by self-interested consumers and businesses as "the invisible hand" of the market: In some respects economic life can operate as if a benevolent mastermind were planning things out. Regrettably, there also is an invisible foot that produces unexpected bad outcomes. Thus when purchasing agents at teen clothing stores see business slowing down, they cut the number of orders placed with textile manufacturers, who then delay new investments and lay off workers, who then can afford to purchase less themselves. This leads to another round of business cutbacks, higher unemployment, and declines in consumer spending.
This vicious cycle is known as a recession; when it gets bad enough, it turns into a depression.


Except for extreme "free market" advocates, few economists, government officials, or citizens are enthusiastic about waiting for the economy to self-correct over the long run. For, as the 20th-century economist John Maynard Keynes famously remarked, "In the long run, we're all dead." Hence most people support "government intervention" to reduce the severity of economic downturns. In fact, important parts of contemporary economics are devoted to studying how to act preventively to head off serious recessions. This has to do with bank lending and the overall money supply, government stimulus spending, encouraging business investment via tax policy and government purchases, supporting scientific and basic technological research, and other means of "keeping the economy healthy." Most of that is beyond the scope of this book.

The takeaway points applicable to technosocial matters are three simple ones:

1. Both positive and negative unintended consequences are substantial in every realm of life.

2. While welcoming the positive ones, most people consider it unwise to passively accept the negative.

3. Affluent societies make enormous investments in economic expertise, data collection, institutions such as the Federal Reserve, and innumerable pro-business policies in order to reduce the severity of unintended negative economic consequences.

Oddly, there is nothing like an equivalent investment in understanding or heading off the negative unintended consequences of technological innovation. Much of the remainder of this book pursues that theme, inquiring what might be done to cope better with uncertainties and unwanted effects.

Toxicities in Consumer Products


Rachel Carson's Silent Spring (1962) is credited with revolutionizing public understanding of the hazards posed by pesticides and other industrial chemicals. During the past half century, scientific and public knowledge of health and environmental effects has improved enormously, with even casual observers now encountering a steady stream of media stories regarding hazards from industrial chemicals emitted during the manufacture, use, and disposal of consumer products. Among the documented risks are cancer and birth defects; impaired respiratory, nervous, and immune systems; and disruption of the body's hormonal balance and reproductive capacities. On reflection, however, it is not clear how thoroughly most people have grasped that information or its implications.

Some especially severe hazards have been eliminated or curtailed, such as those from Dieldrin and other persistent pesticides. Yet global chemical production now is over a hundred million tons annually, and toxicities actually reach more broadly into the fabric of daily life than in Carson's era. Roughly a million products contain diverse combinations of 80,000 chemicals; additional substances such as dioxins are generated during production, use, disposal, and degradation of the original constituents. Most of these chemicals have never been assessed toxicologically, and understanding the combined effects of even a few dozen synthetic organic chemicals within a living organism is beyond scientific capacities. Nor have governments or businesses yet systematically mapped the flow of chemicals in and around consumer products. Comprehensive understanding thus is unavailable; fortunately, it is easy to learn about enough illustrative products to make some sense of the toxics predicament.

For example, polyvinyl chloride (PVC), commonly known as vinyl, is a plastic ubiquitous in homes, schools, and businesses. With 30 million tons produced annually, PVC is one of the most common synthetic materials in the world.
It is cheap and neither rusts like metal nor decays
like wood. Most drinking water now moves through PVC pipe, and the plastic is used for everything from food wrap to patio furniture, doors, window frames, and house siding. Manufacturing PVC involves toxicologically nasty chemicals, including ethylene dichloride, a known carcinogen and suspected neurotoxin. In addition to routine releases during manufacturing, toxic constituents are emitted during industrial accidents and fires, the largest of which to date burned 400 tons of PVC. Several million fires occur annually in homes, restaurants, and other businesses around the world -- and people even burn PVC meat wrappers on barbeque grills. Highly toxic chlorinated dioxin is formed as a byproduct during manufacture, and it becomes an unintentional contaminant in PVC cling film and other consumer products. More than 120,000 tons of lead, a potent neurotoxin, are used annually to improve the plastic's durability, with some of the lead released into the household as window blinds and other PVC products age. Chemical constituents of PVC leach into living organisms from medical plastics and from water pipes. Disposal of trimmings and discarded PVC products via incineration generates dioxins and other chlorinated byproducts.

The flexibility of many PVC products comes from additives known as plasticizers. Phthalates have long been added to PVC to enhance the flexibility of nipples for baby bottles, chewable baby toys, and other items from rain coats and vinyl flooring to garden hoses and electrical wiring. These substances tend not to remain entirely within the plastic, because they are not chemically bonded to the PVC polymer but are merely mixed into the plastic during its formulation. Phthalates evaporate, creating the familiar smell one associates with a new car; and the chemical leaches into air and water.
Phthalates also are used as ingredients in cosmetics and toiletries, including aftershaves, deodorants, skin creams, hair preparations, nail polishes, and fragrances and in
household floor polishes, adhesives, caulks, and paints. Phthalates are suspected carcinogens, disrupt the endocrine (hormonal) system, and are believed to interfere with mammalian reproductive systems across generations. Approximately five million tons of phthalates are produced annually, most of which goes into PVC. Phthalates are ubiquitous in household air, human body fluids, and the global environment, with especially high levels in the blood of children using pacifiers and teething rings.

What ought one make of the fact that many parents who would not let children eat food off the floor have allowed them to chew on toxic toys? Why have many pediatricians failed to communicate effectively with parents regarding such chronic risks? The manufacturers and their chemists may be primarily responsible, but they are now taking phthalates out of baby products because of the backlash from parents, and businesses would modify other products if enough people complained. So it is worth wondering, "Why do so few complain about so many chemical threats?" Is something wrong with the relevant research, its communication and uptake, or with people's good sense?

Methylene chloride was introduced in the mid-20th century as a replacement for more flammable solvents. Some 200 million pounds are used annually in the U.S., principally in paint removers and industrial adhesives, in manufacturing pharmaceuticals and urethane foam, and as a cleaning agent for fabricated metal parts. The chemical also functions as a solvent for extracting caffeine from coffee. The fine print on bags of Starbucks coffee beans for a number of years said: "If this bag contains decaffeinated coffee other than Decaf Komodo Dragon Blend, it was decaffeinated with methylene chloride." The U.S. Food and Drug Administration long ago banned the chemical in hairspray and cosmetics -- but still allows it for treating coffee.
Tested samples of coffee have residual methylene chloride levels well below the maximum allowed 10 parts per million. Given that no lower limit has been demonstrated for
carcinogenic potential, however, as Michael Jacobson, of the Center for Science in the Public Interest, once put it, "It's an insane policy to allow the use of an unnecessary and acknowledged carcinogen." Alternative ways to decaffeinate coffee include ethyl acetate, a chemical naturally occurring in apples, and a Swiss process using nothing but water.

Another toxics issue involving food is that of the pesticide methyl bromide. The odorless, colorless gas has been used for decades to fumigate soil prior to planting tomatoes, peppers, and strawberries, and the food processing industry has used methyl bromide to keep rodents and insects from contaminating cookies, crackers, pasta, chips, spices, herbs, cocoa, powdered milk, and coffee beans. Not surprisingly, residual amounts remain on the food. Toxicology results led California's environmental regulatory agency to label the chemical as a developmental toxicant that may cause birth defects. The Montreal Protocol listed methyl bromide as contributing to depletion of the ozone layer, and the U.S. Environmental Protection Agency eventually banned it in 2005 under the Clean Air Act. China has phased out the fumigant, but it can still be used in the U.S. if the Environmental Protection Agency grants a "critical use exemption." There are quite a few restrictions, but strawberry growers, orchards, and pet food manufacturing are among those still allowed to use the chemical.

Flame retardants are chemicals intended to inhibit combustibility of textiles and upholstered furniture, construction materials, electronic circuit board resins, and the plastic casings for coffee makers, fax machines, computers, and vacuum cleaners. For example, polyurethane foam -- itself toxic in some formulations -- is too flammable to be used safely in upholstered furniture unless treated with a flame retardant. And many electronic devices constitute inherent fire hazards unless treated.
Environmental organizations have pushed for safer flame retardants, and a number of U.S. states and other nations have partially responded; but there
is little public awareness of the issue despite numerous stories in the mass media as well as fictional accounts depicting death by asphyxiation -- such as one in a 1990 crime novel featuring "thick, acrid smoke -- the sort of smoke produced by synthetic stuffing in cheap furniture." Given that fire is so cognitively vivid, one might expect people to be quite concerned about the subject; that they are not raises questions: Does this constitute an implicit trust in manufacturers and in government regulators, trust at odds with the widespread skepticism toward these institutions reflected in opinion surveys? Or is it a manifestation of resignation? Or obliviousness?

Another way to think about the inattention is to consider the learning environment in which consumers operate. Whereas it is easy to observe that a product works fairly well for the intended purpose, it is more difficult to learn about the product's unintended secondary and tertiary consequences. Thus, many cooks have direct positive experience of Teflon-coated cookware, recognizing that food sticks less and that pans are easier to clean. To purchase and use such a product is much simpler than learning that manufacture of Teflon involves PFOA, a perfluorinated chemical that is broadly toxic, does not break down in the environment, and may actually be persistent over geologic time scales. It pollutes human blood around the globe. Levels of perfluorinated chemicals inside North American homes are about 100 times higher than those found outdoors, volatilizing from carpeting and other household products. PFOAs also are created from the decomposition of stain-resistant coatings for carpeting and couches, are released during the popping of microwave popcorn, off-gas from polishes and paints, and even appear in fast-food wrappers. After being transported by global air currents, PFOAs and related chemicals are found in increasing concentrations in Arctic wildlife.
Manufacturers began learning of potential health effects in 1961, but workers exposed to PFOAs were not ordered to wear respirators until 1980.
After another twenty years, executives at 3M became sufficiently concerned about health effects to cease production of PFOAs, whereas other manufacturers waited until pressured by EPA to begin a phaseout in 2006. Accumulating scientific evidence helped drive the change, as did lawsuits forcing DuPont to reveal internal documents showing that the company had violated the law by failing to disclose what it knew about the risks. The company has been assessed more than $100 million in damages, and faces additional liabilities.

The chemicals discussed above are a small sample, of course, but the pattern is clear enough to suggest that households might reasonably be regarded as low-level hazardous waste dumps. Nor is the home the only site where toxic consumer products do their damage, for toxic emissions also occur during mining, agriculture, manufacture, transport, retailing, and waste disposal.

A Final Example
Unintended consequences of technologies in daily life go well beyond the problem of chemical toxicity. In Stuff: The Secret Lives of Everyday Things, John Ryan and Alan Durning analyze where T-shirts, coffee, and other ordinary consumer products come from. Affluent consumerism, they show, is enabled by chains of mining, agriculture, transport, and manufacturing that reach all over the planet. The activities and impacts are mostly hidden, occurring in distant nations, rural areas, and fenced-off industrial sites.

Stuff tells a tale about French fries that is based on solid research but conveyed as a composite, semi-fictionalized picture of the sequence of events leading to a single order of fries at a fast food restaurant. The authors begin with the paper container holding the 90 fries; it was "made of bleached pine pulp from an Arkansas mill." The fries were made from a Russet Burbank potato, "grown on one-half square foot of sandy soil in the upper Snake River valley of Idaho.... [Russets] were selected in the early
sixties by McDonald's and other fast-food chains because they make good fries. They stay stiff after cooking." During the growing season of five months, 7.5 gallons of water were applied to the potato's half-foot plot. About four-fifths of U.S. French fries originate in that area of the country, extensive irrigation creating a downstream portion of
the riverbed [that] is bone-dry much of the year. Eighty percent of the Snake's original streamside, or riparian, habitat is gone, most of it replaced by reservoirs and irrigation canals. Dams have stopped 99 percent of salmon from running up the Snake River, and sturgeon are gone from all but three stretches.

Potatoes are treated with fertilizers and pesticides, which together comprise 38 percent of a farmer's expenses. So much nitrogen from the fertilizer and other agricultural contamination leaches into groundwater that it becomes unfit even for irrigation. Pesticides washing into streams include Telone II and Sevin XLR Plus, each toxic to fish, birds, or mammals. A diesel-powered harvester dug up my potato, which was trucked to a processing plant nearby. Half the potato's weight, mostly water, was lost in processing. The remainder was potato parts, which the processing plant sold as cattle feed. "Processing my potato created two-thirds of a gallon of (somewhat contaminated) waste-water....sprayed on a field outside the plant... and the water sank underground." Freezing the sliced potatoes obviously requires electricity, available from a nearby hydroelectric dam on the Snake River. Whereas more than 90 percent of the potatoes Americans ate were fresh a half century ago, more than two-thirds now are frozen, mostly in the form of fries. Frozen foods can use up to ten times more energy than the equivalent fresh food.
My fries were frozen using hydrofluorocarbon coolants, which have replaced the chlorofluorocarbons (CFCs) that harm the ozone layer. Some coolants escaped from the plant. They rose 10 miles up, into the


stratosphere, where they depleted no ozone, but they did trap heat, contributing to the greenhouse effect. A refrigerated 18-wheeler brought my fries to Seattle. They were fried in corn oil from Nebraska, sprinkled with salt mined in Louisiana, and served with ketchup made in Pittsburgh of Florida tomatoes. My ketchup came in four annoyingly small aluminum and plastic pouches from Ohio.

If an order of French fries entails that diversity of consequences, technological civilization overall might be described as a "cascading series of unintended consequences," as Richard Sclove puts it. Figuring out how to curb unwanted side effects of technosocial actions such as the rise of fast food thus is a tough, tough issue. No complete fix is possible, but it would be feasible to do a lot better if enough people and their organizations became willing to inquire diligently into supply chains and to revise how innovations are introduced, monitored, and modified. That there is not more such inquiry and demand for change is one of the puzzles to which the book will return again and again. Is it fair to say that most people do not care about the effects of their actions? Or is it more accurate to say that they do not know, despite all the publicity? Or are they fatalistic, believing there is nothing they can do? All these and other hypotheses have some plausibility.

Opinions will differ regarding how far to go in worrying about specific uses of particular chemicals, but there is a case to be made, as noted at the outset of the chapter, that Rachel Carson's 1962 story has not been truly grasped. Most consumers are quiescent regarding the toxicities discussed in this chapter. I can understand not worrying all that much about some of the minor problems; but I am astounded that so few seem concerned about carcinogenic and mutagenic effects on themselves and their children right in their own homes. Are they incurious, overwhelmed, fatalistic, or ...? For whatever reason, this more or less willing cooperation obviously assists manufacturers who want to continue selling toxic products, and consumers thus participate as part-victim and part-cause of technological momentum.

And what ought one make of the fact that many of the problematic chemical products came into widespread use after Rachel Carson's widely publicized warnings in 1962? Have people decided that toxification is simply a price that must be paid for "progress," as some people say with a macabre shrug or a matter-of-fact willingness to make tradeoffs? Or is "decided" the wrong verb? Is there an element of technological somnambulism at work, in which things are allowed to happen while the general public engages in consumer daydreams rather than in the information gathering, debate, and decision making that deserves the label "choice"?

If large-scale toxification were technologically essential to meet consumer needs/wants, then many people might be prepared to make a Faustian bargain. Is it actually the case that such wicked tradeoffs are unavoidable, or could many modern conveniences have been obtained with radically less use of toxic compounds? Knowledgeable advocates of Green Chemistry say that it has long been possible to make chemicals less toxic by changing synthesis pathways, by using biocatalysis and less hazardous solvents in manufacturing, and in many other ways. In investigating their claims, I have found that Brown Chemistry continues more because of technological momentum than because of the laws of chemistry. Hardly anyone knows much of the story, which raises fundamental questions about how decisions have been made and are being made about chemicals. And are toxic chemicals a special case, or are they one manifestation of wider problems in the steering of scientific inquiry and technological innovation?




deserves what access to which technological benefits? Present economic arrangements provide a clear, simple answer: those who are able and willing to pay are the ones who get the earliest and best access to most new products and services. There are important exceptions, of course, such as items paid for by governments and therefore enjoyed without direct charge: parks, sidewalks, crosswalks, stop signs, police, fire protection, and, in some countries, medical care. But these are exceptions to the buyer-benefits rule. Is the rule a good one, or would members of a fairer technological civilization use different criteria for deciding who gets what? This chapter will not attempt to answer that very large question; the purpose here is merely to begin an inquiry into fairness by discussing two interrelated technologies that are so taken for granted, and in some respects so elementary, that they do not even come to mind when one uses the word technology: clean drinking water and safe sanitation. Is it all right to deploy civil engineering and related expertise primarily in affluent countries, or do technical professionals have an obligation to make their skills available globally? Should the goods and services enabled by technoscience depend on each family's ability to pay, or ought a technological civilization arrange for toilets and drinkable water even for those too poor to pay? The answers

Govind Gopakumar was the lead author on an earlier version of this case, which was published online by the Center for the Study of Complex Systems, RPI, 2004.


to these questions depend partly on one's values, but defensible answers also require knowing something about the facts of the matter: How bad is the situation? And one needs a logical framework to interpret the facts. Let's look first at some facts, and then consider how to interpret them. First, it is worth knowing that the United Nations considers lack of safe drinking water and sanitation a significant global problem. When today's college students were young children, the UN set a goal "To halve by 2015 the proportion of people without sustainable access to safe drinking water and basic sanitation." This milestone was outlined in the Millennium Development Goals of the United Nations, and it is worth noting how modest the aspiration was -- not eliminating unsafe drinking water and sanitation, just reducing the problem by half. That suggests the problems must be widespread and difficult to solve, or the diplomats probably would have spoken more ambitiously; after all, politicians tend to over-promise.

How serious are the deprivations in accessing water? One of the worst situations is found in Mumbai, India, where approximately five million people live in shantytowns covering virtually every bit of otherwise vacant land within the huge city. The film "Slumdog Millionaire" showed some of that reality, but only a tiny slice. Built of scrap lumber, pounded tin cans, and even cardboard, living arrangements you or I would find utterly unacceptable came into being as impoverished people from rural areas migrated to the city in hopes of a better living. Instead, what many of them now call home would not qualify as housing in the affluent world, partly because no water is piped to such dwellings, nor do they have toilets.

How do they obtain water? In rural areas, women and girls may walk as much as five miles carrying water jugs on their heads. Along with gathering wood for fuel, the onerous task of fetching water requires so much time and energy that many families keep girls out of school altogether, because their labor is essential for the family to survive. In poor cities, the walk is shorter but the task is onerous in a different sense: people must stand in line at public water fountains -- often waiting an hour or longer for a turn. Or they can purchase water from street vendors at prices a poor person can ill afford to pay. The more fortunate ones have a kind neighbor or relative living in a house connected to city water supplies.

The deplorable situation in Mumbai and other cities arose because of rapid population growth, and because the city government's limited financial resources rendered it incapable of paying for needed upgrades to the water supply system. With demand increasing by more than twenty percent a decade, city officials began rationing water by supplying neighborhoods on a rotating basis, meaning that water flows only at certain times of day. Such a distribution scheme may appear to inconvenience everyone about equally, but the administrators actually direct water disproportionately toward affluent neighborhoods whose residents have political clout. Moreover, affluent households can store water in large tanks to provide uninterrupted supply, whereas squatters can gather only enough water to meet immediate needs. Hence, the poor are unbuffered from the caprices of the supply system, and they must wait at the public standpipe even if water is distributed only in the middle of the night.

Collecting water therefore is a daily process filled with mental and physical burdens unimaginable for those of us who have modern facilities. Consider the case of a squatter settlement where the closest water source is two football fields away, across a set of railroad tracks. The water flows not out of a real faucet, but from pipes running parallel to a gutter -- at ground level.
Collecting water that way obviously is a long and tedious task. Combined with the weight of the pots and the distance a woman must walk to carry the water home, filling one or two pots a day is all that most can manage.


The quality of the water also is an issue. In one neighborhood that is not atypical, "For bathing and washing utensils, people use an old well. The gutter runs close to the well, and often overflows into it, as does the dirty water from people's washed clothes, vessels and when they bathe." If this seems unbelievable, you might wish to consult a report (available online) prepared by a not-for-profit organization, titled "Waiting for Water: The Experience of Poor Communities in Bombay."

Even worse is the situation for sanitation: a single communal toilet may be shared among as many as a thousand people, producing an environment so filthy that even dogs avoid it. Residents in large numbers find it a lesser evil to defecate alongside rail lines, canals, highways, or on other vacant land. Squatting in the open obviously exposes one to the unwanted gaze of those passing by, and women especially have to deal with lack of privacy and safety on a daily basis. As one woman explains: "A few of us [women] generally go together for the squatting. Men hide behind the bushes and watch women when they are squatting. If they see a woman alone, they creep in and molest her." To reduce the indignities and risks, women may shortchange their food intake so that they do not have the urge to defecate during the day, preferring to wait for the cover of darkness. Public toilets have been constructed in some squatter settlements by city or state governments. But poor design, low-quality construction, and poor maintenance often mean that these toilets become unusable within a few years. Those who can afford it may have the option of paying to use private toilets attached to retail shops or offices, where shopkeepers and building watchmen make a profit by charging for each use. Such arrangements are impossibly expensive for those who are most destitute.


The health risks associated with inadequate sanitation are substantial. Outbreaks of cholera are the worst threat, but hepatitis and other infectious diseases also are spread by human fecal matter. There were nearly 600,000 cases of cholera worldwide in 2011, concentrated primarily in Africa. Parts of that continent have been going backwards in sanitation over the past twenty years, with cholera cases increasing by some 500 percent over that time span.

Water and Sanitation as Human Rights?

One of the simplest but most important questions for the twenty-first century concerns whether it is tolerable to let fellow humans live without access to the basic technologies that purify water and treat sanitary wastes. Can you even imagine living without hygienic toilets or a piped supply of pure water? In addition to the physical deprivation, health issues, fear or shame, and aesthetic disgust, the endless search for water and sanitary facilities is a huge drain on the time and energy of impoverished families also lacking in nutrition and medical care. Is it perhaps time to reconceptualize the understanding of human rights to include water and sanitation as technologies that everyone deserves? The nations one refers to as "democratic" offer at least partial protection for basic political and legal rights, such as freedom of speech and religion, sometimes enshrining these guarantees in written documents such as the Bill of Rights of the U.S. Constitution. However, the Universal Declaration of Human Rights of 1948 went further, declaring in Article 25 that "Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control."


Although the General Assembly of the United Nations endorsed that wording, the Declaration remains a hope rather than a genuine guarantee. For the right to an adequate standard of living actually has been fully implemented by only a handful of governments, principally in Western Europe; there is no such right in the U.S., as a visit to any inner city or rural area will reveal. With increasingly sophisticated technologies and the wealth thereby created, and with several hundred thousand civil and environmental engineers now trained, it perhaps is feasible for the world to follow through on some of the noble aspirations announced more than half a century ago. The United Nations Committee on Economic, Social and Cultural Rights declared in 2002 that the right to water "clearly falls within the category of guarantees essential for securing an adequate standard of living, particularly since it is one of the most fundamental conditions for survival." The organization's leadership expressed a belief that "Water belongs more to the economy of common goods and wealth sharing than to the economy of private and individual accumulation. While the sharing of water has often been a major source of social inequality in the past, today's civilizations recognize that access to water is a fundamental, inalienable individual and collective right." Well ... there is considerable room for doubt about who recognizes what; but there surely is a profound question. Making access to water and sanitation a human right would be nothing more than extending the assumptions that affluent people already have about their own rights. Just about every household in the U.S., Europe, and Japan has an ample supply of clean water together with safe sanitation. In marked contrast are the vivid images of squalor and filth televised from impoverished neighborhoods in parts of Africa, Asia, and Latin America.
The worst of these problems are created less by poverty per se than by the fact that nearly half of the urban population lacks running water or toilets.


What Might Be Done?

The enormous scale of the world's water and sanitation needs has not gone unnoticed. The World Health Organization of the United Nations is the primary international agency with responsibilities in the area, but also attacking the problem are the International Red Cross, Water Aid, and Oxfam, among others. They are attempting to raise necessary funds, muster expertise, organize residents of poor communities, and encourage city officials to tackle the challenge more vigorously.

One highly controversial approach is privatization of public utilities: turning the provision of public services over to businesses. Advocates believe this could improve efficiency, lead to greater infrastructure investment, and upgrade responsiveness to customer needs. On the plus side, Bechtel and other large companies do have sufficient funds to upgrade water supplies, build sewage treatment plants, and otherwise invest. However, critics of privatization do not believe that profit-making firms are appropriate to address the problem, because the poor have such limited ability to pay. The record to date of privatization can be debated, but a substantial majority of progressive observers oppose it. In response to efforts by multinational corporations to turn water into a business, Uruguay adopted a constitutional amendment banning privatization of water and other infrastructure.

An alternative to privatization is to make utilities more responsive to a diversity of needs through enhanced public scrutiny of administrative and financial actions. Cited as an illustration of how this could work is the Water & Sanitation Department in Porto Alegre, Brazil. Rather than having administrators make all the decisions, the water authority uses participatory budgeting: a representative set of consumers meets periodically to decide where new infrastructure is needed and how it should be financed. Poor people's consumption is subsidized by higher charges on affluent customers and on non-essential consumption.


The political culture of Brazil is relatively favorable for such dialogue and cooperation, and Brazil has a good deal more economic strength than most African countries or the poorest parts of South Asia. So although the Porto Alegre experience is encouraging, whether the participation-and-subsidy model could be successfully exported is unproven.

However organized, the most important barrier to new water and sanitation capacity is financial, because construction costs usually have to be borne upfront, and poor cities rarely can afford the large sums involved. A report of the World Panel on Financing Water Infrastructure at the 3rd World Water Forum, held in Kyoto in 2003, estimated that roughly $100 billion per year would be required on top of the $80 billion per year already being spent. Because such resources simply are not available in the locales that need better water and sanitation, either the investments will not be made or the money will somehow be raised elsewhere. Charitable donations are a possible source, and wealthy countries and their citizens donated almost that amount following the 2004 Indian Ocean tsunami. Would they really do so year after year, however, especially for the virtually invisible problem of everyday water supplies? A more systematic and reliable source of funds might come from a new global tax designed to finance a variety of needs (a possibility considered in later chapters).

Money alone will not be enough, however. Even if funds were available to hire engineering expertise, machinery, and other resources, it would not be entirely clear how to proceed. Squatter areas are not like suburbs; they are composed of dense clusters of houses, usually poorly constructed, and the streets are often too narrow for conventional vehicles.
Knowledge of local customs and user needs would be essential if technical experts are to develop solutions adapted for the actual conditions of use rather than simply copied from the technologies that work in affluent neighborhoods.


Also arising are questions about ongoing maintenance of water and sanitation infrastructure. Whereas this is fairly automatic in rich countries, it may be unrealistic to assume reliable maintenance by governments of poor countries, some corrupt and some merely short on funds and administrative capacities. Users themselves might be a better bet, but many will use the facilities and few will do the work unless there are incentives. To protect fisheries against overfishing, it has proven helpful to have community ownership and decision making, and this might prove true for public water and sanitation facilities: users who shared ownership and control via a neighborhood organization might be better able to enroll or shame others into participating in maintenance. Such arrangements are no guarantee, but there are many local and transnational organizations with considerable experience in how to organize local efforts in poor nations.

***

In conclusion, no one could honestly say that it would be easy to bring safe water and sanitation to every human. But neither can anyone know that it is forever impossible. Whether to launch such an effort revolves around a choice that can be framed as a question: Do readers agree with the United Nations committee that said, "The human right to water entitles everyone to sufficient, safe, acceptable, physically accessible and affordable water for personal and domestic uses"? Or are you closer to believing that water delivery and purification, sanitary facilities and waste treatment, and other technologically mediated goods and services ought to be reserved for those who can afford to pay? More generally, if you would consider endorsing universal access to clean water and safe sanitation, are there other technological benefits such as protein, vaccines, and schooling that also might deserve consideration as belonging in the same category?
Wherever you stand on the subject of fairness at this point, by what criteria would you propose to determine who deserves what?


Chapter 4. When Innovation Proceeds Too Slowly

On first inspection, it might seem that beneficial innovation would happen more or less automatically: businesses have strong incentives to develop new products that will sell, and many consumers of course love being early adopters of innovative electronics and other new stuff. Altogether, then, just about every possible niche should be occupied, should it not? Not quite true, but the omissions can be difficult to see, because it is easier to perceive bad things happening than good things not happening. One can hardly fail to observe burning Pinto gas tanks, space shuttles exploding, Fukushima reactors being flooded, or the World Trade Center collapsing. Reducing harm from such calamities certainly is a crucial facet of improving technological governance; but it may be equally important to notice what is missing, so as to accelerate development and diffusion of beneficial innovations. This chapter seeks to understand why certain types of technological innovations might be missed or under-emphasized for extended periods.

Dengue fever (pronounced "DEN-gee"). As the name connotes, victims suffer fevers as high as 105 degrees Fahrenheit, together with other symptoms including body aches and rashes. Some 50 to 100 million new cases appear each year, the estimates imprecise because symptoms are difficult to distinguish from other illnesses. Dengue fever is carried by species of mosquitoes prevalent in tropical and subtropical areas inhabited by 2.5 billion people. More than 90 percent of the victims have mild symptoms or recover fairly quickly; but a percentage of those who catch the illness a second time develop what is known as dengue hemorrhagic fever, which carries a 50 percent mortality rate if untreated. Dengue fever is listed by the World Health Organization as one of sixteen tropical diseases considered to be neglected by biomedical research and treatment providers. This is true despite the fact that one of the so-called Founding Fathers of the U.S., Dr. Benjamin Rush, identified the disease during a 1779-80 epidemic in Philadelphia. Albert Sabin (developer of the polio vaccine) discovered in 1944 that dengue fever is a viral infection transmitted by mosquitoes; the virus belongs to the Flaviviridae family, which has more than 70 known members, including the yellow fever and Japanese encephalitis viruses. Despite the long history, there still is no vaccine against dengue fever, although efforts are underway and WHO projects that a vaccine may be available by 2015.

Why the lengthy delay? There is no systematic historical inquiry into this or most other "Why not?" questions, because historians typically deal with what DID happen. However, it is apparent that biomedical researchers tend to target diseases suffered by affluent white people in the northern hemisphere. When a vice president for R&D is authorizing research expenditures, the choice typically will be based partly on whether there is likely to be a paying clientele; to protect against the shingles virus, for example, affluent older people are now paying up to $300 a dose. More generally, pharmaceutical companies tend to skimp on vaccines of most kinds -- finding it more profitable to emphasize drugs for curing disease rather than drugs for prevention.
In addition to economics, racism and ethnocentrism probably have contributed to keeping dengue fever out of the public mind in countries where most of the scientific talent resides. As the title of a recent book by George Washington University physician Peter Hotez puts it: Forgotten People, Forgotten Diseases: The Neglected Tropical Diseases and Their Impact on Global Health and Development.

Green Housing. However important financial considerations may be in slowing some potential innovations, consumer and producer psychology also matters when it comes to deeply entrenched ways of doing things. One of the first American green housing communities was Village Homes, begun in the 1970s in Davis, California. An environmentally designed subdivision, it featured sensible solar orientation, extensive parks and greenways, integral bicycle routes, community gardens, and natural surface drainage -- doing away with costly storm drains and creating landscaping that needs less irrigation in a very dry climate. Initially shunned by real estate agents and conventional homebuyers, it has become a dearly loved neighborhood with a delightful ambience, lower utility and food costs, and a strong community spirit, described in contemporary real estate brochures as "Davis's most desirable subdivision." Why, then, have so few subsequent developers emulated the successful experience? Why do home buyers continue to accept, even embrace, old-fashioned housing that is unnecessarily energy intensive, located in clumsily designed neighborhoods, built with particle board whose glues off-gas formaldehyde, and otherwise appears almost deliberately designed to be environmentally inferior? Considering that heating systems and other materials and methods now available are superior to those incorporated forty years ago in Village Homes, why does that subdivision remain so far ahead of the norm?

Natural Gas Vehicles.
Discussed for decades have been the dependence on the Middle East and Venezuela for oil, the possibility of being drawn into military conflicts to protect the sources (or the profits of the oil companies), the pollution caused by refineries and oil transport spills, and the contributions to urban air pollution. Natural gas is generally regarded as superior on all these counts for the U.S., given plentiful domestic supplies (though opponents of fracking might not accept that it is clean). Still, compared with the alternatives, natural gas is especially attractive for urban transportation, emitting fewer particulates, less carbon dioxide, and lower levels of smog-causing nitrogen oxides.

Natural gas vehicles (NGVs) are a proven technology, with thousands of city buses, taxis, and other fleet vehicles converted to natural gas starting in the 1970s to reduce air pollution and to guard against unreliable supplies of gasoline and diesel fuel. More than 20 percent of new buses purchased in the U.S. and the EU are designed to operate on natural gas, as does an increasing share of medium- and heavy-duty vehicles. Nine million NGVs are in operation worldwide, led by Pakistan, Brazil, and Argentina with nearly two million apiece. General Motors sells CNG automobiles in Latin America and in Asia. In Italy, one can purchase many different natural gas Fiat models off the showroom floor; and Ford, Volkswagen, Mercedes, Citroen, Peugeot, and GM each offer at least one Italian CNG model. Some have separate tanks for natural gas and for gasoline, and switch automatically between the two. In the rest of the EU, Volvo and other European manufacturers offer CNG buses and heavy trucks, but do not make natural gas-powered automobiles readily available. Toyota and Ford used to manufacture CNG vehicles for the U.S., but abandoned the market -- leaving only the Honda Civic GX, which is manufactured and sold in very small quantities despite routinely winning Green Automobile of the Year awards.

What is curious about the inattention to CNG autos is that the phenomenon Langdon Winner terms "technological somnambulism" typically involves esoteric innovations out of public view, such as robotic cannons or cloned meat. Out of sight, out of mind, as the cliché puts it.
But natural gas vehicles obviously do not quite fit that category, for virtually everyone is familiar with natural gas via home heating, hot water, cooking, and grilling. How much of a stretch could it be to suppose that vehicles might be powered by the fuel? Although technological momentum favors gasoline because of the network of existing gasoline stations and repair shops, the advantages hardly constitute lock-in. For example, urban gas stations on streets with natural gas service could actually boost sales by adding a CNG refueling option, and governments already reimburse a nontrivial fraction of the costs for businesses that purchase alternative fuel infrastructure.

Whereas price is a major barrier in consumer adoption of some alternative energy technologies such as rooftop photovoltaics, new CNG automobiles can be manufactured at scale for less than hybrid versions of the same car. Fuel costs heavily favor natural gas: while oil prices quadrupled in 2007-2008 and gyrated thereafter, natural gas prices declined by almost half, and supplies outstripped storage capacity during part of 2010. Moreover, the Honda GX has for many years qualified for a $4,000 U.S. tax credit. So price is not a good explanation for the public quiescence. Nor is R&D funding an issue, given that automobile manufacturers spend huge sums annually on product development and retooling. Competing ferociously for market share coming out of bankruptcy, why has GM not chosen to take competitive advantage of the CNG technology it is already selling profitably elsewhere? Detroit executives might ascribe the decision to a shrewd reading of the North American market, except for the fact that there was so much demand that Honda's complete 2009 production run of CNG-powered Civics sold out before a single one actually had been manufactured!

What about the political-economic power of the major oil companies, OPEC, and their assorted allies? As plausible as these culprits may seem, strong countervailing pressures come from well-capitalized natural gas companies.
And some of the businesses one thinks of as "oil companies" (e.g., ConocoPhillips and ExxonMobil) actually sell almost as much natural gas as gasoline. Nor is there any evidence of recent lobbying to interfere with NGVs.

By process of elimination, then, one is led to the key players normally catalyzing environmental progress: not-for-profit environmental organizations, none of which has done anything to put the issue of natural gas vehicles in the news or on the political agenda. Thus, the Sierra Club's "Green Cars" program emphasizes improved fuel economy, reduced driving, and hybrid/electric cars, with no more than a brief mention of NGVs. Environmental Defense helps Yahoo Autos rank automobiles on a greenness scale, with the CNG Honda Civic a perennial winner; but the EDF program director declines to push for NGVs (or any other specific transport technologies), averring that "the market should decide." The Natural Resources Defense Council (NRDC) generally favors alternatives to coal, oil, and nuclear, but a 2008 NRDC report casually dismissed natural gas-powered automobiles as inferior to hybrids.

There is no clearly correct pathway for surface transport (except that trains are definitely superior to trucks over long distances). But I suspect that CNG vehicles have gotten short shrift less because of disputes over their merits than because of organizational dynamics: what might be called the environmental executive's predicament. Any not-for-profit organization has to build membership and raise money by implicitly asking, "What issues will galvanize existing members and attract new ones?" Passionate environmentalists likely to join and donate typically care about whales and polar bears, climate warming, and other headline causes; and they tend to associate energy greening with new, "clean" alternative sources.
Hence, executives of environmental organizations are pretty much bound to emphasize the deep green, charismatic energy alternatives -- solar and wind power, and to a lesser extent fuel cells, algae, and other esoterics. Natural gas may offer more realistic promise over the next decade or two, but such a
humdrum topic makes poor advertising for an environmental organization looking to build membership and budget. Geothermal Heat Pumps. Another energy innovation that receives almost no attention in the media or in environmentalists' proposals for reduced use of coal and other fossil fuels is the geothermal heat pump (GHP). Lord Kelvin discovered the concept of the heat pump in 1852, but it took until the 1940s for the concept to become a viable technology, and another twenty years before commercialization. Thus it has been possible for half a century to purchase off-the-shelf geothermal heating and cooling systems, but they remain less than one percent of installed HVAC capacity. Most of the installations are in the East and the upper Midwest, with Illinois, New York, and Pennsylvania predominating. Not to be confused with geothermal hot springs or with conventional air-source heat pumps that look like central air conditioners, GHP systems take advantage of near-constant year-round soil temperatures to heat and cool interior spaces. A simple pump circulates heat-transferring fluid within long lengths of closed-loop, polybutylene tubing, usually buried horizontally at depths of 6-18 feet depending on the frost layer. A heat exchanger at ground level controls the temperature of the building, either through ordinary forced-air fans that circulate the heated or cooled air, or through radiant floor heating. Heat pumps are quite efficient because they leverage a small amount of electrical energy to push a larger amount of heat energy from one environment to another. GHPs can be implemented economically on either a residential or a commercial scale. Schools are ideal, with playgrounds and parking lots offering vast spaces for burying horizontal cooling loops. 
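The efficiency claim above is easy to make concrete. Below is a minimal arithmetic sketch; the COP of 4.0, the resistance-heating baseline, and the 10,000 kWh annual heating demand are illustrative assumptions chosen for the example, not figures from the text:

```python
# Illustrative heat-pump arithmetic. A heat pump's coefficient of
# performance (COP) is the ratio of heat energy moved to electrical
# energy consumed; GHPs commonly reach COPs of roughly 3-5 because
# the ground stays near a constant temperature year-round.
# All specific numbers here are assumptions for the sketch.

def heat_delivered_kwh(electric_kwh: float, cop: float) -> float:
    """Heat energy delivered for a given electrical input."""
    return electric_kwh * cop

# Compare a GHP (assumed COP 4.0) with electric resistance heating
# (COP 1.0) over an assumed annual heating demand of 10,000 kWh.
demand = 10_000.0
ghp_electricity = demand / 4.0          # 2,500 kWh must be purchased
resistance_electricity = demand / 1.0   # 10,000 kWh must be purchased

print(ghp_electricity, resistance_electricity)  # → 2500.0 10000.0
```

On these assumptions, the GHP purchases one quarter of the electricity a resistance system would, which is the sense in which a small amount of electrical energy "pushes" a larger amount of heat.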
Despite the many advantages, only 86,000 geothermal units were installed in the US in the most recent year for which data is available -- compared with 6.4 million central air conditioning and other heating-cooling systems.


Geothermal Strengths and Weaknesses

Advantages:
* Low operating cost
* Low maintenance
* Long life expectancy
* Unlike solar, requires no supplemental furnace
* Low-cost integrated water heating
* No exposed outdoor equipment
* Low noise

Disadvantages:
* High initial capital cost
* Difficult to find experienced installers
* Low probability but potentially high cost of repair if coolant loops are damaged
* Local geology and soil type may increase initial cost
* Requires accessible ground for drilling or trenching

Why have GHP systems failed to gain significant market share? Given all the advantages, coupled with the many disadvantages of alternative systems (such as rapidly increasing electricity costs for conventional air conditioning in hot climates), the technical facts alone cannot be the explanation. There must be social factors at work shaping perceptions and behaviors. The subject has been almost entirely absent from public discourse, with hardly any newspaper articles or other media coverage. This, in turn, is due partly to the fact that environmental groups have done nothing to create news regarding geothermal. For example, the Environmental Defense Fund makes global climate change one of its main priorities, yet the organization's extensive web site and publications tout solar, wind, and esoteric forefront energy technologies while ignoring geothermal. Another reason for GHPs' lack of market penetration is the same one facing other green innovators: deeply entrenched ways of doing things -- social momentum. Village Homes, the innovatively designed green community in Davis, California, affords a somewhat larger version of the same phenomenon characterizing geothermal heating/cooling: Those actually
living there are pleased, but few others apply the innovation in their own projects. GHP systems remain uncommon in the rest of the world, too, but that may be changing as governments are tightening energy standards. For example, Britain's Minister for Climate Change in 2011 formally opened a new shopping mall in London, heated in part by what is claimed as the largest ground-source heat pump in Europe. Banking practices also interfere with geothermal. Lenders base mortgages on the income of a loan applicant, and monthly payments cannot exceed a certain percentage of income. That should be no problem, because a homeowner with lower monthly utility costs obviously can afford a higher mortgage payment that includes paying for the GHP system. However, banks typically do not take utility costs into account! This prevents homebuyers from borrowing the extra $20,000 needed to pay for a geothermal system, thereby inducing them to purchase, say, a natural gas furnace and electric air conditioning because these are cheaper initially even if more expensive in the not-very-long run. There is an additional complication: The key decision maker on HVAC equipment for new homes is usually the developer/builder, often a large, publicly traded firm whose executives are acutely aware of standard lending practices. Indeed, they often pre-arrange financing for customers, and they try to provide the most attractive product possible within each homebuyer's income bracket. A small increase in price can move a house up to a more expensive market segment, pricing it out of consideration by the target buyer. Builders therefore have a disincentive against incorporating energy-saving measures if they increase the price they have to charge. Another piece of the story: Real estate agents only get paid when they make sales, and they therefore have an incentive to steer clients toward houses likely to be appealing. 
If clients are thought to desire the biggest/"nicest" house for the least money, and are unaware of the advantages of
geothermal, why not show them the same old thing? If an agent must defend the additional cost of a GHP system to a would-be buyer who has not requested it, the probability of a sale drops. Why take that risk? In many locales there are few, if any, licensed GHP installers -- only a single certified installer in the entire state of Arizona as of the latest date for which figures are available. To implement geothermal on a large scale would require training of a skilled workforce -- and no one in the construction or heating/cooling business has an incentive to invest the time and money that would take. Left to themselves, then, all the key players in the industry are likely to behave next year pretty much the same ways they behaved this year, last year, and for the previous two generations. One rather unhappy way to summarize the barriers to adoption of geothermal innovation is to say that ignorance, habit, lack of expertise, and money constitute a powerful combination for maintaining the momentum of conventional heating and cooling practices.
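The bank-lending barrier is easy to quantify with standard amortization arithmetic. In the sketch below, the $20,000 premium comes from the discussion above, while the 5% 30-year mortgage rate and the $120/month utility savings are assumptions for illustration only:

```python
# Illustrative amortization arithmetic for the bank-lending barrier.
# Assumptions: a $20,000 GHP cost premium (from the text), financed
# at an assumed 5% over 30 years, against assumed utility savings of
# $120/month. Real rates and savings vary by market and climate.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

extra_payment = monthly_payment(20_000, 0.05, 30)  # roughly $107/month
assumed_utility_savings = 120.0                    # assumed $/month

# Under these assumptions the homeowner comes out ahead every month,
# yet standard underwriting compares income only against the payment,
# ignoring the offsetting savings.
net_monthly = assumed_utility_savings - extra_payment
print(round(extra_payment, 2), round(net_monthly, 2))
```

The point of the sketch is the asymmetry: the extra payment enters the bank's debt-to-income calculation, while the utility savings that would cover it do not.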

*** The takeaway lesson from this chapter is simple: Just because people have needs, and just because technoscientists might be able to help meet those needs, does not mean that economic, political, cultural, and other barriers may not interfere. Identifying the barriers and conceptualizing ways of reducing them are important tasks in evolving a more intelligent technological civilization. However valuable it is to head off dangers, it may be almost equally important to head off obstructions to beneficial technological innovations.


CHAPTER 5. FOURTH CHALLENGE: INNOVATION TOO RAPID

An automobile driver is not effectively in control if only the steering wheel
and accelerator are operational; there must also be brakes. On reflection, it is apparent that there is a direct analogy to technological steering: Direction alone is not sufficient to innovate wisely, nor is it sufficient to be able to accelerate as is needed for research and innovation moving too slowly; there also must be ways of slowing to an appropriate pace when circumstances warrant. To a 21st-century young adult, rapid technological innovation probably seems entirely natural and desirable. In fact, however, there have been many eras and many cultures where technological innovation did not look very much like it now does; so innovation must not be entirely natural. In fact, if one looks just beneath the surface, it becomes obvious that contemporary developments in technoscience are being deliberately accelerated by governments, businesses, and universities. For example, businesses can deduct from their taxes all monies expended on R&D, the National Institutes of Health in the U.S. now takes more than $30 billion annually from taxpayers and gives it to biomedical researchers, and U.S. military R&D is about ten times higher than that. When rapid change becomes the norm, those who argue for moving more
slowly tend to be perceived as weird. Where innovation does come slowly, except for communities like the Amish, the slow pace is not usually due to deliberate choice. Rather it has to do with the momentum of the status quo (e.g., housing and conventional energy), or because there are significant barriers in the way of moving rapidly (as in countries with corrupt governments, lack of venture capital, and other handicaps). There is no one best pace, of course. Some innovations, such as those discussed in chapter 4, ought to come quickly, others slowly or even not at all. In some places, in certain eras, rapid change may be very sensible given people's needs and opportunities. In other times and places, slower change may be preferable. The appropriate pace of change depends on context. Thoughtful people may disagree, moreover, about the rate at which a given technology should be researched, prototyped, and diffused. No one has a definitive calculus for such matters, and disagreements are likely for any significant technology. In the end, those disputing the matter must turn to voting or another method of social decision for authoritative resolution. This chapter analyzes some of the obstacles to intelligent, humane pacing of technology, and proposes several strategies for altering the pace of technical development if enough people were to decide that there is a problem with the present rate of change. My emphasis is on ways of slowing technological innovation, because those with influence already know a great deal about speeding it up. They do not need any help, whereas those skeptical of rapid pace do need help thinking about their predicament. I do not put great faith in any particular reform discussed in this chapter, because the rudimentary notions advanced here would require further scrutiny, more precise formulation, and experimentation. The intention, then, is not to prescribe exactly what ought to be done; it is to help liberate our imaginations. 
How might the speed of technological innovation be modulated, if and when enough people with influence get ready to do so?


Thinking About Pace

No one knows -- even roughly -- how many technological innovations will occur this year, or how many actually did occur last year or in the past decade. The diversity of phenomena that fall under the heading of innovation is simply beyond the ken of social statistics at this time. Still, everyone knows that the number per decade is quite large. What about changes in the rate of change: Are things speeding up? It is common to hear superficial assertions to this effect, such as those in the once-popular book, Future Shock, which heralded and agonized over the "premature arrival of the future." Another one-time bestseller in this genre was Megatrends. And James Gleick wrote an unmemorable book titled Faster. The image evoked in such popular treatments is of a constantly accelerating rate of change, as depicted by an exponential growth curve. Historians of technology assure us that this idea is nonsense; within any given technoscientific arena, there have been rapid bursts followed by periods of relative quiescence -- punctuated equilibrium. And historians of medieval technology argue that even in a period now perceived as stagnant, there actually were remarkable changes occurring (stirrup, plow) that could have seemed rapid to those alive at the time. In the mid to late nineteenth century, the railroad certainly had social effects as profound as those roiling the twenty-first century. Unfortunately, the historians cannot tell us whether the present age may be different from the past. It is possible that the overall rate of change is increasing. But you and I are not in a good position to know, in part because governments have not made the requisite investments in social statistics and historical analysis that could help clarify the situation. What one can say for certain is that some thoughtful observers believe that the present pace of technological change is too rapid, and their reasons seem worth pondering. 
For example, Paul Goodman suggested several decades ago that
Since we are technologically over-committed, a good general maxim
in advanced countries at present is to innovate in order to simplify the technical system, but otherwise to innovate as sparingly as possible.... The issue is not whether research and making working models should be encouraged or not. They should be, in every direction.... The point is to resist the temptation to apply every new device without a second thought.... (New) ideas may be profitable for private companies or political parties, but for society they have proved to be an accelerating rat race.

Lord Ritchie-Calder, a distinguished science journalist, observed a decade or two before personal computers or cell phones that there is so much knowledge that we cannot cope with it effectively and so much technology that it tyrannizes us. Boris Pregel, a half century ago, believed that "The most outstanding feature of modern society is the acceleration of technological changes. These changes affect all aspects of our daily life and the man in the street has great difficulty in assessing the consequences of this impact." Engineering commentator Edward Wenk proposed in the 1970s that the swift pace of technological change no longer matches the response time of human affairs; technical prowess may exceed the pace of social skills, especially our ability to anticipate second-order consequences and take crisis-avoidance measures. Regrettably, while these thoughtful observers agree that there is something important about the pace of technology, their extended works do not clarify the problem much more than do these brief quotes. Moreover, it is difficult to separate out their concerns over the pace of technology from their concerns over the direction. The speed of technical change typically gets a small mention, and the majority of books and articles then go on to discuss directions. Another genre of thinking on the subject is typified by a statement of Stewart Udall, a former Secretary of the Interior with a well-deserved reputation for humane concern and ample knowledge: I don't think science at all has outraced society, but only that society has been laggard. Udall goes on to advocate better education of the young, so they will
understand how useful science is. He does not explain how this sort of education will work better than the grievously flawed efforts being made to teach reading and math in the 1970s when he wrote (and still today). Nor does he explain how ordinary people could possibly learn enough to keep up with technoscientific experts whose organization, funding, incentives, and other resources are elegantly attuned to promote rapid change. Nor does Udall appear to see that it is logically dubious to quarrel about whether science is going too fast or society is going too slow: the real issue is how to bring technoscience, business innovation, and purchasers' behaviors into an overall relationship that serves the rest of humanity and other living creatures. What does the public think about the pace of technology? Few people have informed opinions on the subject. But a high percentage admit that they sometimes feel overwhelmed by it all, wish that things didn't change so fast, or otherwise express opinions that seem to reflect concern about the pace of change. Even many scientists do not understand the technologies they use every day outside the laboratory, and they can feel caught on a treadmill even when they also are helping to propel the very same treadmill. There surely are mechanisms by which the pace is regulated in particular instances: Businesses often accelerate product development and production when they see potential customers, and back off when customers appear unreceptive to an innovation, as when video phones waited many years for widespread use. New medical devices such as heart valves, catheters, shunts, and the like now undergo screening that slows down their introduction while developing information about safety and efficacy. New chemicals are subject to Premanufacture Notification disclosure and testing requirements.


Military R&D is paced in part by the decisions of elected officials, who accelerate and decelerate in response to budgetary and foreign policy fluctuations.

However, most industrial technologies and many consumer technologies fall outside the scope of any of these regulating mechanisms other than the first: If a business executive believes that an innovation can be sold profitably, the business typically is free to attempt to find willing purchasers. This approach to decision making has important benefits, but the choices are sensible (if they are) for the parties with a direct interest in the innovation, and they are sensible with respect to one specific product at one time. No one is looking at the overall ensemble of innovations across time periods; and no one is looking out for third parties who are not directly involved as sellers or buyers (which, for most items, is a majority of humanity).

Trial and Error Learning

To take the subject out of the realm of pure opinion, consider one of the best-established facts concerning human social life: Most improvements in most arenas of instrumental action come about via trial-and-error learning from experience. This simple reality turns out to have profound implications for thinking about the appropriate pacing of technological change. Learning which technologies are good for whom (and how to make them better for more people) is no exception to the rule that learning from experience is usually crucial for successful action in complex new endeavors. Trial-and-error learning has a few simple requirements: * Someone has to be paying attention to feedback that indicates a possible error. * Someone must have time and resources for interpreting the feedback. * There must be venues, incentives, time, attention, and other
ingredients for deliberating and negotiating changes that may be warranted on the basis of the feedback. * Those responsible for the technology in question must have incentives to modify their behavior on the basis of the feedback and the negotiations concerning it. Further discussion of trial-and-error learning will be the subject of later chapters, but from just the few ideas sketched above, one can discern some of the implications for pace. First, the number of trials occurring at a given time will have a significant bearing on a society's ability to monitor, diagnose, and correct errors that inevitably occur in all human activities. Other things equal, more trials at a given time means less attention to each trial. Likewise, the length of time a given trial remains in operation before being changed will affect learning and correction capacities: It is likely to take quite a while to notice that something is going awry, and even longer to renegotiate it. Hence, if the pace of change is too rapid, there is an excellent possibility that "society will go adrift in a sea of unintended consequences." Examples have begun to accumulate of rapid pace as a barrier to trial-and-error learning. It now is widely acknowledged, for example, that the problems of the U.S. nuclear industry stemmed in part from haste: Giant nuclear reactors were ordered and designed before sufficient operating experience had been obtained with small reactors. The industry also scaled up before figuring out how to deal with nuclear wastes. And reactors went into construction and operation before ascertaining whether the technology would be acceptable to enough members of the general public. The specifics are different, but many other technological endeavors have proven problematic due partly to the rapidity with which they were undertaken. The US and the USSR did not just build nuclear-tipped missiles; government officials authorized and funded tens of thousands within two decades. 
Synthetic organic chemicals: Too many, too rapidly,
before much could be learned about health and environmental effects. Suburbanization essentially prevents public transit and forces people to drive long distances, we now know. Shopping malls outside cities undermine city businesses, contribute to white flight from inner cities, reduce the tax base available to fund urban schools, and generally redouble urban problems. Had suburbs and malls been constructed more gradually, there would have been more time to realize and re-evaluate. More cases could be added, but the point is apparent: An overly rapid pace can obscure defects in design and implementation of technological innovation, making it difficult to perceive accurately what is happening, difficult to propose and debate alterations, and difficult to implement ameliorative plans.

Culture as a Public Good

The psychological and social environment people inhabit is largely a public good that cannot be bought and sold (such as clean air, helpfulness of strangers). An important facet of one's cultural environment consists of the largely intangible social consequences of technological change. Because understanding of the cultural environment and how to measure its "health" is so rudimentary, no one knows how costly the intangible psychosocial losses may be. But economists exhibit impressive agreement that wherever prices fail to capture relevant costs, market transactions will misallocate scarce resources. Not counting the costs of unwanted cultural changes in the price charged for new goods and services means that prices will be lower. People therefore can be expected to "purchase" more technological change than if those who benefited had to pay for the cultural costs being thrown onto the public. There is another way to put the same point. Social stresses these days are considerable. Civil wars and other organized violence are probably the worst. But disruptive social changes also occur in less obvious ways, as in
energy boomtowns where new oil and gas discoveries bring an influx of workers and their families. This seems like a good thing, but it requires almost instant development of housing, schools, water and sewer infrastructure, roads, and other services -- followed almost inevitably by rapid decline when construction jobs are gone and the wells no longer require many workers. Phoenix, Las Vegas, and other western cities grew rapidly, and then were hit very hard by the Great Recession of 2007. And financial insecurity eats at a great many people, including those who from the outside would seem to have stable employment and a comfortable lifestyle. The evidence is mixed and none of the statistics inspire great confidence; but it seems likely that technological changes in the twentieth century helped foster a variety of undesired ills: Homicide rates increased dramatically; suicide attempts increased; rates of violent crime accelerated (but recently have declined); and divorces went from rare to commonplace. Concepts and measurement of alcoholism, drug addiction, and mental health have varied so much that comparisons with earlier periods are even less reliable than for the other measures discussed above. But the problems appear worse, and at least are no better despite the proliferation of relevant pharmaceuticals and treatment plans. Because there is a partial tradeoff among one's own goals, and even more tradeoffs among different peoples' and different cultures' needs, those making decisions about technological innovation presumably would want to be very careful not to sacrifice the intangible, cultural goods while seeking to fulfill economic and material aspirations. If it seems worth introducing technological changes that will cause disruptive social and psychological changes, it would be nice to at least get the most material good with the least cultural values sacrificed. How to achieve that is not exactly near the center of contemporary conversation and inquiry.


Cognitive Shortcomings and Legacy Thinking

Another cause of the pacing problem results from the way humans think. I do not refer to what they think about; I mean the processes by which people filter information and reach decisions. Aside from college and occupation, among the most important choices people make are major purchases such as houses, automobiles, and furniture/appliances. Salespersons in these businesses report that impulse is a major factor governing many buying decisions. I hesitate even to mention the crazy choices many people make in love and marriage. And what fraction of parents really know what they are doing when they choose to bear children? There are dozens of ways that people level, sharpen, disregard, misconstrue, and misremember information. These biases are most likely to operate (1) when there is much uncertainty and the events concerned are probabilistic in nature, (2) when situations are very complex with many possible relevant factors to be considered, and (3) when strong beliefs influence the forecasts. Most decisions about technology involve some sort of forecast, entail considerable complexity and uncertainty, and require some probability estimate. So, on top of the inherent complexity and difficulty of making wise judgments under rapidly changing conditions, flawed judgmental faculties reduce the odds of sensible choice. The above shortcomings no doubt contribute to the ease with which most of us fall into legacy thinking, believing without much investigation the stories we are told by others. One especially pertinent here is the belief that we "all" benefit from scientific and technical "progress." Note, however, that the gap between rich and poor nations is growing; and the gap between rich and poor within the U.S. has grown fairly steadily since 1970 -- and has widened even more rapidly since about 2003. The expense of medical technology is so great that health care costs have become prohibitive for those not covered by public or private medical insurance. 
Many other examples could be given, but the basic point is obvious: Some people benefit significantly, some are hurt significantly, and many remain about
the same after an innovation as before. The idea that we all benefit roughly equally is a lie. For the problem of pace to be tackled, another belief that would need reexamination is the powerful notion that underlies this chapter's entire discussion: Even if everyone wanted to, the saying has it, "You can't stop progress." As long as enough people continue to believe that, it will remain true. If enough people stop believing it, the idea will become much less potent. There are a number of indications that the idea of progress (and the rate of it) may be subject to redefinition. While it would be foolish to count on changes in key cultural assumptions of the magnitude discussed above, it is equally mistaken to rule out the possibility. Humans now have rejected human sacrifice and cannibalism. Slavery, too, is a thing of the past in most of the world. Racism and sexism are greatly diminished, and may gradually be on their way out. Religious intolerance declined for most of the 20th century, but has enjoyed a rebound. Within the affluent nations, the poor no longer are sent to prison and a variety of social welfare programs have mitigated the harshness of their lives; further steps, such as a guaranteed minimum income, have been proposed and may someday be enacted. None of these steps were taken purely on pragmatic grounds; all required sweeping changes in cultural assumptions and in individual thinking. So it is too pessimistic to be certain that humans will not stumble over a long period toward new assumptions about pacing technological change.

Possible Tactics
One reason for a bit of optimism about prospects for modulating the pace of change is that potentially effective levers for slowing pace already are readily available. Much of what is required is merely to do the equivalent of taking the foot off the accelerator. It would not even be necessary to press a brake, for without active governmental funding, most areas of science
would slow down appreciably or even grind to a halt. If taxpayers presently are giving such support, surely they have a right to withdraw it? The situation is not quite so clear for most technologies. Only those dependent on military or other federal funding would fall directly into the easily slowed camp. However, there is a more subtle way in which taxpayers unwittingly stimulate the pace of innovation: The expenses of developing and putting a new product on the market presently are tax deductible; this includes not only plant and equipment, but salaries for accountants, attorneys, salespeople, and factory workers. Through the investment tax credit, in fact, taxpayers actually subsidize up to forty cents on every dollar spent on certain kinds of innovation. This may be a very good way of helping reduce the risks of innovation in high-priority fields where important needs depend on technical change. As an across-the-board strategy for pacing technology, however, tax credits are a dubious device because they only have a forward gear and accelerator. So what about adding two additional categories for tax-paying businesses? Leave the tax credits in place for the highest-priority innovations, and continue allowing tax deductibility of expenses associated with medium-high priority innovations. But create a third category into which would fall most types of innovation: For such innovations, none of a business's expenses would be tax deductible, so shareholders would have to take all the risks. A careful civilization could take an additional step: Create a fourth category (at the opposite end of the spectrum from the tax-credits-for-important-innovations end): Impose tax penalties for especially destructive or antisocial innovations. Go ahead and innovate if you choose, but your business will pay additional taxes for each dollar devoted to innovations falling into the undesirable category. 
Note that even with such a severe tax policy, innovation would not be halted for it still would pay a corporate decision maker to pursue innovations sufficiently valuable to return a good profit on
top of the tax penalty. Because there would be fewer innovations under these scenarios, people might value each innovation more highly. This should lead to a willingness to pay more for the relatively few that make it onto the market, as well as greater cultural appreciation for the new technologies -- a public good.
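The four-tier proposal can be summarized as simple after-tax arithmetic. The tiers below come from the text; the specific rates (a 35% corporate tax, the up-to-40% investment credit mentioned above, and a 20% penalty) are assumptions chosen only to make the comparison concrete:

```python
# Sketch of the chapter's four-tier tax treatment of innovation
# spending. Tier definitions follow the text; the rates are
# illustrative assumptions, not proposals from the author.

CORPORATE_TAX_RATE = 0.35   # assumed
INVESTMENT_CREDIT = 0.40    # "up to forty cents on every dollar"
PENALTY_RATE = 0.20         # assumed

def after_tax_cost(dollars: float, tier: str) -> float:
    """After-tax cost to the firm of `dollars` of innovation spending."""
    if tier == "credit":          # highest priority: taxpayers subsidize
        return dollars * (1 - INVESTMENT_CREDIT)
    if tier == "deductible":      # medium-high priority: expenses deductible
        return dollars * (1 - CORPORATE_TAX_RATE)
    if tier == "non_deductible":  # most innovation: shareholders bear all risk
        return dollars
    if tier == "penalized":       # destructive innovation: extra tax owed
        return dollars * (1 + PENALTY_RATE)
    raise ValueError(f"unknown tier: {tier}")

for tier in ("credit", "deductible", "non_deductible", "penalized"):
    print(tier, after_tax_cost(100.0, tier))
```

Under these assumed rates, $100 of spending costs the firm $60, $65, $100, or $120 depending on the tier, which is the sense in which the schedule adds a neutral gear and a brake to a tax code that currently has only an accelerator.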

So where has the analysis gotten? Some of the most thoughtful observers of the human condition have drawn attention to the rapid pace of technical change as a problem for more than a hundred years. And the evidence linking rapid change to alcoholism, mental illness, family stress, and other problems is pretty good, even if not strong enough to make an airtight case. It seems incontrovertible that trial-and-error learning is made more difficult by rapid change, and such learning is the main technique humans use to successfully shape complex new endeavors. The economists offer a potent theoretical insight concerning the inevitable market neglect of public goods such as the psychocultural costs and benefits of innovation. Because people have diverse goals -- some of which are helped by technical change and others of which are hurt -- it makes sense to proceed in a way that allows choosing the changes that will help the most and hurt the least; that almost surely implies a modest rate of change. Each of the above reasons by itself is suggestive, and together they may constitute a compelling case for reconsidering whether faster is better.

Four thoughts to ponder in closing: (1) Big ideas usually look impossible at the outset. (2) The belief that "You can't stop progress" is a social concept, not a fact about the world; if enough people come to desire it, technoscientific change certainly can be slowed. (3) Technoscience is not sacrosanct -- it is a social activity supported by taxpayers and other social institutions; according to democratic ideals, ordinary people have a right to help make decisions about their collective fate. (4) However unlikely it now appears, there is a chance that there could be a sweeping change in attitudes regarding uncontrolled technical innovation, perhaps roughly equivalent to the surprising upheavals that ended slavery and Soviet communism.

In sum, although controlling the pace of technological change certainly is farfetched at this point, no one knows what may become possible. Selecting and creating an appropriate pace of change is one of the central tasks that would have to be taken on by a wise technological civilization. If pace cannot be sped up and slowed down to meet diverse needs in various contexts, then a significantly improved civilization probably is impossible.


CHAPTER 6. PLUNGING AHEAD VERSUS INTELLIGENT TRIAL AND ERROR

At the outset of complex new endeavors, no one fully understands even the direct and intended consequences, much less the secondary and tertiary unintended consequences. The benefits of technological innovation can be quite wonderful, yet the harms can be quite severe; the ratio of good to bad depends in part on how those with greatest influence approach their tasks. It therefore makes sense to figure out how individuals, organizations, and world civilization as a whole can develop needed technologies in a timely way, while minimizing harms and assuring reasonable fairness in who gets what. No one fully knows the answer to that multi-part riddle, but the basic outline is surprisingly easy to understand. A starting point is to acknowledge that humans rarely proceed satisfactorily in new activities except by learning from experience. That means trial and error usually will be unavoidable, and the only question is whether the trials will be blind and uncontrolled, or carefully structured. At least when it comes to major innovations that can significantly damage millions of people, some kind of precautions usually are more sensible than plunging blindly ahead. What would be entailed in arranging for more careful trial and error? How can individuals and organizations develop and use better strategies for proceeding in the face of uncertainty? Flipping that question over makes it easier to answer, because some of the main pitfalls in trial-and-error learning are fairly obvious:


(1) A misguided innovation (or failure to innovate) may produce unbearably costly outcomes before error correction can occur;

(2) Innovative actions may retain too little flexibility, preventing errors from being corrected readily; and

(3) Learning about errors may be very slow.

If those pitfalls can be avoided, then trial-and-error learning will be more probable, faster, and less damaging.

Potentially Unacceptable Risks

How can technoscientists and others participating in an innovative trajectory cope with potentially unbearable risks? Even in highly uncertain endeavors, it is possible at the outset to partly foresee and protect against some of the worst risks. Homeowners, for example, do not have to calculate the likelihood of their house burning down; merely knowing that it is an unacceptable possibility is enough to warrant obtaining insurance as an initial precaution against catastrophic loss. Likewise, rather than relying entirely on preventing all accidents, U.S. nuclear decision makers required containment buildings around civilian reactors; most of the radioactivity released at the Three Mile Island nuclear plant in Pennsylvania was thereby prevented from entering the environment. If the Soviets had taken this precaution instead of assuming impeccable performance by their nuclear plants, the 1986 accident at Chernobyl probably would have had less serious consequences. And the Japanese reactors that released so much radioactivity following the earthquake and tsunami of 2011 obviously lacked adequate containment. Other tactics would be appropriate for different types of challenges, but the basic idea is to take some kind of initial precautions rather than merely hoping for the best. The precautions will not prevent problems, but they can make problems less costly.


If uncertainty is high and consequences are potentially severe, moreover, it makes sense to take especially stringent precautions. Thus, in 1976-1978, government officials had to decide whether to take action on potential depletion of stratospheric ozone by chlorofluorocarbons (CFCs). There was no solid, direct evidence that such depletion was occurring, and the American Chemical Society complained that proposed legislation would constitute "the first regulation to be based entirely on an unverified scientific prediction." Nevertheless, Congress and EPA acted to ban most aerosol CFC sprays, even though few other nations initially did so. The caution was soon proved warranted, as evidence of chlorine-catalyzed ozone depletion quickly became irrefutable. Another aspect of proceeding cautiously is to put the burden of proof on advocates of risky activities. Whereas government once had to go to court to prove a pesticide unsafe after it had produced substantial damage, manufacturers now are required to demonstrate prior to marketing that their products do not pose "an unreasonable risk." This tactic is imperfectly applied in current pesticide regulation, but the burden of proof has gradually shifted toward proponents of potentially risky chemicals. Government officials responsible for air safety likewise require Boeing and other aircraft manufacturers to demonstrate that new airliners can withstand high levels of turbulence, can deal with engine fires, and otherwise are prepared to cope even with extreme conditions.

A second problem with trial-and-error learning is that by the time serious flaws become apparent, an innovation may have become quite resistant to change. Imagine if it now turned out that cell phones cause brain cancer! Technologies become deeply enmeshed in the careers, lifestyles, and expectations of those manufacturing or using the technology. It therefore makes sense to design new endeavors from the outset so that they can be altered fairly readily, should unfavorable experience warrant.


For example, flexibility is higher when costs are borne gradually, allowing expenditures to be redirected as learning develops. Payment on performance is far more flexible than arrangements in which the bulk of the costs must be paid in advance. Think of the large, up-front capital investments entailed in constructing Tokamak reactors for fusion energy plants; if the innovators' hopes do not work out -- as they have not for the past half century -- the investment will be irrecoverable. Fusion advocates presently are using exactly the same time frame as when I first heard about the technological hope fifty years ago: feasible in about twenty years. If the past half-century's funding for fusion and other high-tech energy dreams had been devoted instead to incentivizing ground-based geothermal energy, perhaps half the buildings in the U.S. would now be heated and cooled at close to zero current expense. Another illustration of inflexibility was NASA's now-retired space shuttle. There was the large initial cost, of course. Just as importantly, the world had only a few shuttles, and it took a long time to replace those that blew up. A launch regime relying on expendable rockets would not have been able to do everything the shuttle could, but it would have allowed for pay-as-you-go, and it would have been much easier to revamp when problems arose. If students in Science, Technology, and Society were asked to recommend strategies to promote job creation in technologically innovative industries, you probably would not want to do what New York State officials (and those in many other states) have done -- give large sums of money to businesses, coupled with reducing or eliminating taxes for companies that locate new plants in the region. That money is gone whether or not the jobs materialize.
If government instead helped businesses to meet their wage bills for each suitably created job, then payment would depend on how many people were eventually employed -- and payments could be adjusted every year if necessary. Subsidies could be increased to give greater incentives for new job creation, or decreased if unemployment became a less pressing problem. The greater flexibility of the pay-as-you-learn approach is obvious, not because it is free of difficulties, but because it is relatively easier to change course without writing off huge sunk costs.

Flexibility also can be enhanced in other ways. Phasing in a policy during a learning period is a common practice in business, for example, as is experimenting in a limited geographical area and/or with a delimited client base. Because McDonald's routinely uses this strategy, the business does not lose very much if the McRib or another new menu item fails to sell. Another tactic for preserving flexibility is to try two or more approaches simultaneously. This was the tack pursued by U.S. physicists when they were developing a detonator for the first atomic bombs: not knowing which approach would work, they pursued more than one technological trajectory in hopes that at least one would prove feasible. That method is so simple it is almost undignified to term it a strategy, yet it remains true that both ordinary people and leaders of major organizations often plunge ahead based on a single hope rather than hedging their bets. Another way of preserving flexibility is to use existing staff and facilities rather than constructing new buildings and creating a new, dedicated organization with permanent staff. If the RPI Arts Department had been given additional funding, instead of leaping to build the EMPAC facility with its forty new staff members, there would have been many opportunities to learn what kinds of electronic media programs would attract audiences, what kinds of rooms were needed for which kinds of projects, and otherwise to build on existing strengths. The particular tactics by which flexibility can be achieved obviously vary greatly among innovation contexts, and various participants will find some tactics more advantageous to them than others.
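To make the arithmetic of the pay-as-you-learn subsidy concrete, here is a minimal Python sketch. The per-job rate, unemployment target, and adjustment step are all hypothetical numbers of my own choosing, not figures proposed in the text; the point is simply that payments track verified outcomes and the rate can be revised each year.

```python
# Illustrative sketch of a pay-as-you-learn job subsidy.
# All parameter values are hypothetical; the design point is that
# money flows only after jobs materialize, and the rate is adjustable.

def annual_payout(subsidy_per_job, verified_new_jobs):
    """Government pays only for jobs that actually materialized."""
    return subsidy_per_job * verified_new_jobs

def adjust_rate(rate, unemployment_rate, target=0.05, step=1000):
    """Raise the per-job subsidy when unemployment exceeds the target,
    lower it when unemployment falls below -- the yearly 'learning' step."""
    if unemployment_rate > target:
        return rate + step
    if unemployment_rate < target:
        return max(0, rate - step)
    return rate

# Year 1: a firm creates 120 verified jobs at a $5,000 subsidy each.
rate = 5000
print(annual_payout(rate, 120))   # 600000 -- paid after the fact
rate = adjust_rate(rate, 0.08)    # unemployment high: raise the incentive
print(rate)                       # 6000
rate = adjust_rate(rate, 0.04)    # unemployment low: dial it back
print(rate)                       # 5000
```

Unlike an up-front grant, nothing here is a sunk cost: if the jobs never appear, the payout is simply zero.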
Those with influence often want to do something dramatic, of course, so they accidentally or deliberately sacrifice flexibility in the quest for big breakthroughs. NASA administrators chose a big and glamorous approach to the space shuttle partly because they needed enough votes in Congress to maintain funding after the Moon expeditions had played out; the shuttle appealed to elected officials because it was so complex that the work could be spread among businesses in many different congressional districts. So it would be unfair to depict every inflexible venture as a simple mistake; a choice may be sensible in some respects even if misguided in others. There also are times when innovations should be introduced rapidly across large geographical areas to prevent "learning" and adjustment by opponents. For example, the gradual approach to school desegregation in the U.S. allowed whites with resources to move to suburban districts, thus subverting the goals of the endeavor.

Barriers to Learning
The term trial-and-error learning makes it seem as if learning and error correction would be pretty much automatic. In fact, learning often is quite difficult, and on important matters it makes sense to prepare actively to encourage timely learning -- and corrective action based on it. How can organizations learn more swiftly, from their own experiences and from those of others dealing with similar decision problems? One element is to arrange for feedback from initial efforts to reach those with authority to make a change rapidly. In practice, feedback often takes too long, allowing unfortunate results to accumulate. For example, the harmful effects of DDT were not persuasively documented until a quarter century after the pesticide's initial use. It also took decades before there was clear evidence that high-rise public housing complexes have a destructive effect on many residents. Even after feedback emerges, those with authority do not necessarily act expeditiously. During the Vietnam War, it took several years and many deaths before Army leadership took steps to modify early versions of the M-16 rifle that were jamming far too often, leaving soldiers defenseless. The weapon was gradually modified and improved -- for example, by coating the firing chamber with chrome so that ball powder would not adhere -- but the trial-and-error learning process was so slow that some soldiers actually sent home for cleaning equipment that the Army failed to issue.

The most common response to slow learning is to rue the misfortune, but to consider it an immutable fact of life. That approach strikes me as unduly fatalistic. Given that there rarely is enough funding, time and attention, or other resources to tackle all the pressing issues or proposals in a domain, it may be sensible to favor initiatives offering a potential for quick learning. Big technological endeavors obviously face constraints in this regard, and cannot possibly match the dexterity with which airlines now alter fares within days or even hours if travelers do not respond as expected in booking seats. But in almost every activity there are ways to speed up feedback, learning, and change if those with influence think hard enough and care enough. School reformers trying to improve learning in math and science (at which the U.S. ranks poorly in international comparisons) typically face a plethora of ills and a bewildering variety of partially contradictory remedies, with no prospect of knowing in advance how well a given proposal will work. Yet innovations in the teaching of math and science do not ordinarily give priority to the potentially meritorious ideas whose results could be determined fairly quickly. Instead, big schemes such as No Child Left Behind are launched without a fallback option if the initial hopes are dashed -- as they usually are. Nor are decision makers in most other technical or nontechnical domains normally attentive to the problem of long-lagged learning, despite the obvious fact that error correction cannot be attempted until feedback emerges. Some regulatory endeavors do make efforts to speed up learning, however, so the strategy is not totally outside the realm of what is conceivable for flawed humans and their organizations.
After numerous bad experiences with chemicals such as PCBs, vinyl chloride, and DDT, the Toxic Substances Control Act of 1976 attempted to make sure that all new commercial chemicals would be tested by toxicologists prior to marketing. However, the law was badly written and poorly enforced by the U.S. Environmental Protection Agency. Observing the U.S. morass, the European Union recently enacted the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) program, which finally seems to be doing what TSCA attempted a third of a century ago. Government regulators throughout the world have been relatively successful in requiring premarket testing for efficacy and safety before approving sale of new pharmaceuticals, and some types of medical devices now are also subject to such screening. One does not usually think of these requirements as part of an intelligent trial-and-error process; but the testing is a way of speeding up negative feedback instead of waiting for it to emerge naturally, over a longer period and with greater damage. Other ways of improving learning obviously might be imagined for other arenas of innovation.

Is it a Utopian hope to suppose that these and other strategies for coping with uncertainty might come to be employed somewhat systematically? Each of the elements of Intelligent Trial and Error actually is already being applied in various policy areas, though typically not in an explicit or coordinated way. Perhaps the most thorough application to date was in early research on recombinant DNA, the scientific procedures that led to the biotechnology industry. Scientists organized a voluntary moratorium on potentially risky research in the early 1970s, and worked out a regulatory strategy through the National Institutes of Health. Six classes of experiments were prohibited altogether, and precautions were adopted for the others, varying in stringency according to the degree of risk each type of experiment was believed to pose. The aim was essentially to make the research forgiving of error: special laboratory facilities were used to prevent bacteria from escaping from the research building, and intentionally enfeebled strains of an especially well-known bacterium were used for most of the research, so that even if bacteria escaped they would have great difficulty surviving outside the favorable conditions of the lab. Recombinant DNA researchers proceeded to learn from experience, partly via worst-case experiments aimed, for example, at finding out whether virulent new organisms might accidentally be created. There was some disagreement about the interpretation of some tests, but the great majority of observers found reassurance in the prioritized testing. Close monitoring of hundreds of ordinary rDNA experiments also provided reassurance. As uncertainty was reduced, more experiments were allowed at lower levels of containment; most of the containment requirements eventually were dropped, and no experiments remained altogether prohibited.

Failure to use enough such strategies characterized civilian nuclear power. Utility companies, reactor manufacturers, and regulatory agencies embraced potentially catastrophic safety and financial risks with inadequate precautions. Learning was bound to be slow, with significant time lags before receipt of persuasive feedback. And the trials were so incredibly complex -- with up to 10 million pieces of paper for a single nuclear reactor -- that interpretation of errors was almost impossible. The industry was inflexible, in part because billions of dollars had to be expended before a reactor generated a single kilowatt of electricity. Inertia rather than learning accumulated, as it proved impossible to reframe an endeavor involving a host of supporting public and private institutions, including state and federal courts, uranium mining and processing, reactor vendors, utility companies, regulatory agencies in every state, and a combination of government, business, and university R&D for reactor design, development, and radioactive waste handling.


It took several decades to find out that giant nuclear power plants would be politically and economically unacceptable in most nations, by which time hundreds had been constructed throughout the world. The error was irreversible, learning slow, and the cost enormous. Policy makers could have pursued much smaller reactors, using different designs that would have been less expensive, more flexible, and incapable of catastrophic meltdown. Similar problems can be found in most large-scale projects, from dams to jet fighters to the faulty construction of the World Trade Center.

The above problems and possibilities apply throughout technosocial life. First, given that it is undesirable to step over a cliff while learning from experience, it makes sense to protect against unacceptable risks where feasible. Second, because learning usually takes a while under the best of circumstances, it makes sense to structure innovations so that they can be modified or dropped fairly readily if negative feedback warrants. Third, because people and organizations do not automatically learn to do better, it makes sense to prepare deliberately for learning. Intelligent Trial and Error is not, however, an automatic process that specifies exactly what should be done in any given situation. And the required strategies may cost more initially, even if they are cheaper in the long run. How far to go in employing protective, learning-oriented strategies obviously requires negotiation and judgment, and cannot be reduced to any simple formula. It is a social choice, not a technoscientific one. Table 1 summarizes the strategies and tactics required for Intelligent Trial and Error, including those discussed above as well as several others to be analyzed in later chapters.



Table 1. Strategies and Tactics of Intelligent Trial and Error

1. Effective Deliberation
   a. Started early enough, while the technology was still highly malleable?
   b. Maximum feasible diversity of concerns debated?
   c. Well-informed participants?
   d. Public decisions actually reached and implemented?

2. Fair Decision-Making Process
   a. Fair representation of those who might be affected?
   b. Highly transparent process?
   c. Burden of proof appropriately distributed?
   d. Authority to decide shared appropriately?

3. Prudence
   a. Sensible initial precautions (e.g., premarket testing)?
   b. Erring on the side of caution (e.g., redundant back-up systems)?
   c. Very gradual scale-up?
   d. Substantial built-in flexibility (e.g., minimum dedicated infrastructure)?

4. Active Preparation for Learning from Experience
   a. Widespread recognition of the need for trial-and-error learning?
   b. Well-funded, multipartisan monitoring?
   c. Funds to ease resistance to error correction (e.g., victims' compensation)?
   d. Strong incentives for error correction?

5. Appropriate Expertise
   a. Protections against widely shared conflicts of interest?
   b. Sophisticated, timely study/advice, including social and environmental aspects?
   c. Substantial advisory assistance to have-not partisans?
   d. Effective communication via mass media and other channels?
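Readers who want to apply Table 1 to a particular endeavor can treat it as a checklist. The following Python sketch renders the table as a data structure and tallies "yes" answers; the yes-count scoring is my own illustrative simplification, since the text stresses that Intelligent Trial and Error requires judgment, not a formula.

```python
# Table 1 rendered as a checklist data structure (abbreviated wording).
# The simple yes/no tally below is an illustrative device only.

ITE_CHECKLIST = {
    "Effective Deliberation": [
        "Started early enough, while still malleable?",
        "Diverse concerns debated?",
        "Well-informed participants?",
        "Public decisions reached and implemented?",
    ],
    "Fair Decision-Making Process": [
        "Fair representation of those affected?",
        "Highly transparent process?",
        "Burden of proof appropriately distributed?",
        "Decision authority shared appropriately?",
    ],
    "Prudence": [
        "Sensible initial precautions?",
        "Erring on the side of caution?",
        "Very gradual scale-up?",
        "Substantial built-in flexibility?",
    ],
    "Active Preparation for Learning": [
        "Need for trial-and-error learning recognized?",
        "Well-funded, multipartisan monitoring?",
        "Funds to ease resistance to error correction?",
        "Strong incentives for error correction?",
    ],
    "Appropriate Expertise": [
        "Protections against shared conflicts of interest?",
        "Sophisticated, timely study and advice?",
        "Advisory assistance to have-not partisans?",
        "Effective communication via mass media?",
    ],
}

def tally(answers):
    """answers: dict mapping a criterion string to True/False.
    Returns (yes_count, total_criteria) across all five categories."""
    total = sum(len(items) for items in ITE_CHECKLIST.values())
    yes = sum(1 for ok in answers.values() if ok)
    return yes, total

# Recombinant DNA, as described in the text, would score well;
# civilian nuclear power would score poorly.
print(tally({"Sensible initial precautions?": True,
             "Very gradual scale-up?": True,
             "Well-funded, multipartisan monitoring?": False}))
```

A low tally does not prove an endeavor will fail, but it flags where blind trial and error is being substituted for intelligent trial and error.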



CHAPTER 7. INDUCING BUSINESS EXECUTIVES TO BETTER SERVE CITIZENS

Commercialized electronics, chemicals, transportation, and other technologies are shaped and deployed largely by business executives and their employees acting in concert with purchasers, as everyone knows. Businesses often do a splendid job of it -- ergonomically designed desk chairs at a fair price, iPads and apps, snowboards and other sports equipment, your favorite item. For other products, however, businesses do an unnecessarily risky, shoddy, or misleading job of developing and marketing their products. Or they fail to do what needs doing. How to encourage the good outcomes while reducing the frequency and severity of the not-so-good?

A Starting Point
Honda designs and manufactures automobiles that are regarded more favorably than those of many competitors. Even Honda could easily improve, however. For example, the company does not offer a hybrid or diesel version of most models, so the Odyssey minivan that is my family's main vehicle gets relatively poor gas mileage (~20 miles per gallon). There are no fog lights, moreover, and the front seats are not deep enough for my long legs to be comfortable on trips. The sound system is weak. Emergency handling is mediocre, and snow handling is so poor that I filed a complaint with the National Highway Traffic Safety Administration. I know that Honda engineers could have remedied these shortcomings at modest cost, because other mid-priced cars are superior at one or several of the tasks. The alternatives all have other drawbacks, and we guessed (perhaps incorrectly) that the Honda would be most suitable overall for our purposes. But paying $35,000 for the best of a mediocre lot in 2013 is a bad deal: the car industry can do better, and I wonder how to get those in authority to do it.

Appliances: My Electrolux refrigerator has never made ice properly, and the freezer temperature oscillates from 0 to 14 degrees Fahrenheit because the compressor does not run often enough. (At 14 degrees, ice cream is too soft.) Fortunately, I purchased an extended warranty; repair people have made eight trips, replacing five or six different pieces of electronics (thereby more than eliminating any profit the company could have made). The companion Electrolux dishwasher blew a circuit board and pump after a year or so, and the model had already gone out of production, so parts were almost impossible to obtain. Even when the dishwasher works, it does not clean as well as the 18-year-old version it replaced. Inexpensive floor lamps started to fall apart almost immediately, despite gentle treatment. The new Braun shaver's rechargeable battery shaves half as long as claimed, and it performs no better than its aged predecessor, despite two intervening decades of shaver "R&D." The KitchenAid stand mixer's selector handle is fabricated from a metal soft enough to bend when changing to a faster setting. The slow cooker's lid had three handles and attachments, one disintegrating from the heat and two coming unscrewed in the wash -- one of which got stuck in the garbage disposal. The manufacturer had no replacement parts -- but did send a brand new replacement when I complained (again eliminating their profit). These problems with my upper-middle-class stuff are puny compared with the problems facing those who do not have jobs, or who do not have the basics needed for a decent life.
But if manufacturers do not design and build simple items very well, what are the chances that they will do a fine job at more complex and more important activities? No one knows just how well businesses eventually could perform, and it seems unreasonable to expect that every product will work perfectly (or that other economic outcomes could become uniformly excellent). Fortunately, a process of improvement could be launched without knowing the eventual level of achievement. At a high level of generality, it is easy to specify what needs to change: an improved economic system would induce business executives to combine their quest for near-term profits with better service to customers, employees, and the public more generally. Of course there is room to disagree regarding what constitutes "better service." In addition to products that work as users desire, I have in mind reduced toxicities and greater safety, reduced air and water pollution, more focus on preventive health than on curative medicine, better jobs, and higher quality of work life. The list could be extended, and no doubt readers will agree with some of those goals and disagree with others. That we have different priorities actually does not matter, for everyone has one problem in common -- and I use the word "everyone" advisedly, because I think it applies to all people now alive and all who will ever be born into a civilization anything like the one now prevailing. Our common problem is this: how to motivate business executives to be sufficiently responsive that each of us can get our needs met almost all the time. As consumers, that involves reasonable prices, durability, and safety. As workers, it means being able to find appropriate employment, fair treatment on the job, and opportunities for advancement rather than being stuck in dead-end work. As citizens, it means full employment, so that taxpaying is widely shared and there is leisure time to serve as Little League coaches, attend occasional meetings of the school board, and otherwise contribute to civic life. As President Teddy Roosevelt once expressed it, no one can be a good citizen without a wage more than sufficient to cover the bare cost of living. I realize that some readers will doubt that significant improvements in business and economy are possible.
If you are in that camp, you may wish to reformulate your doubts into a social design challenge: given that business executives are incentivized for sales and profits, how might those lures be rearranged to induce design, manufacture, and marketing of a mix of goods, services, and jobs superior to those now prevailing? And how can serving customers in the short term become more compatible with what is desirable over the long term, including for people who are not customers? That obviously is a substantial set of challenges, and this chapter proposes two innovations that might help.

The Privileged Position of Business

Before turning to the proposed economic innovations, it is worth getting clearer about the rationale for them. The task of making technological choices is largely delegated to business executives, who often use their discretion in ways that would be controversial if enough people knew enough and thought enough about what is going on. High-level executives exercise substantial discretion over technological innovation:

Which new products to create? Considerable funding and effort are devoted to arguably trivial products, and to dangerous ones. For example, chemical industry executives and their R&D staffs developed chlorinated chemicals such as DDT and PCBs -- partly to profitably dispose of the chlorine produced as an unwanted byproduct in other manufacturing processes.

Which innovations not to develop? Automotive executives chose not to commercialize hybrid or all-electric cars for nearly a century after their feasibility had been proven.

How durable, how repairable, how reliable? How well suited for their tasks? 2013 VW Jettas and Honda Civics are inferior to 2010 models in important respects.

How are products put together technologically? Consumers at most choose an outcome, not the process required to reach it. For more than half a century after the invention of nylon, little research was done on how to recycle the textile, and millions of pounds still end up in landfills every year.

Where to locate manufacturing plants and other large facilities, and how often to update plant and equipment?


The list could be extended, but the point is obvious: a relative handful of (mostly) men exercise substantial influence over what goes on in technological civilization. One result: in the past century more than 80,000 chemicals have been dispersed worldwide -- 100 million pounds or more for some of the highest-production compounds -- with persistent toxics entering the tissues of most living creatures. Although buyers of course contributed by responding enthusiastically to the products offered, toxics could not have become an environmental horror story without the initiatives taken using corporate executives' discretionary authority. In a less dramatic way, essentially the same phenomenon has occurred in every technological arena, with an estimated one million products now for sale around the globe. How might business executives (together with their technoscientific employees and their customers) be encouraged to innovate in more public-regarding ways? What would it take to accelerate movement toward a greener chemicals industry and toward making better use of other technologies' malleabilities?

CEO Pay and Accountability

Decades or generations of research, public debate, and experimentation probably will be necessary to evolve a more intelligent form of corporate/market system. So I have no illusion that the ideas presented here are definitive. But one needs a starting point when trying to think anew, and I have one to try out on you. To restate the goal: I am looking for some mechanism for inducing the most important business decision makers to exercise their discretion over technological innovation in ways that are better for the environment and for other public values such as job creation. Socially improved innovation would require that corporate executives achieve a better ratio of public service to profitability. If it makes little sense for CEOs and their businesses to be rewarded for behaving in ways antithetical to what makes sense for others, then the trick is to find an incentive arrangement that will effectively revise the strange contract that now encourages socially perverse behaviors so long as they are profitable.


One interesting option would be to legislate a change in how high-level executives of the most important businesses are paid. Let them be hired and fired just as they are now: company unprofitable, stock price falling -- as was true for many years at the internet company Yahoo? -- then the board of directors may fire the old CEO and bring in someone new. That makes as much sense as any mechanism so far proposed for choosing top executives. However, it is not at all clear that CEOs also should be paid in a manner set by their boards (and heavily influenced by the executives themselves). At present, a powerful combination of two incentives (job tenure and pay) pretty well guarantees that CEOs will prioritize corporate growth and profitability rather than public service when the goals conflict. Indeed, there presently is little reason to expect corporate leaders to try very hard to figure out how to make the goals more compatible. On reflection, then, it is hard to miss the possibility that CEOs might work harder for public goals if they could earn large salaries and bonuses only to the extent that their companies improve in serving public purposes. I recognize that the idea seems far-fetched on first inspection, but consider how it might be designed to work. Suppose the maximum annual bonus for CEOs of the largest chemical companies were raised to $30 million apiece -- roughly five times their present average. The CEOs might be inclined to go along, don't you think? One or more of them could earn the maximum, and others could earn lesser amounts, based on how well each performs in competing to move their companies toward chemical greening, product durability, or any other outcomes that enough people want. How much of a pay differential would be required to motivate the best possible performance? Ought the comparisons to be done on an overall basis, or divided into categories such as consumer products, pesticides, and chemical intermediates?
Should the scheme be extended beyond products and pollution to additional aspects of company performance such as employee morale and job creation? Such questions about a revised incentive and accountability system would have to be answered by some combination of learning from experience and negotiation.
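To make the incentive structure concrete, here is a minimal sketch of how such performance-indexed bonuses might be computed. The company names and "greening scores" are invented for illustration; only the $30 million cap comes from the text's example, and a real scheme would of course need far richer metrics.

```python
# A toy sketch of the revised CEO-pay idea: bonuses set not by boards but
# by how well each company performs on public-service metrics such as
# chemical greening. Company names and scores are hypothetical; the
# $30M cap is the figure used in the chapter's example.
MAX_BONUS = 30_000_000  # proposed maximum annual bonus, in dollars

def award_bonuses(scores):
    """Scale each CEO's bonus by the company's score relative to the best performer."""
    best = max(scores.values())
    return {company: MAX_BONUS * score / best for company, score in scores.items()}

scores = {"ChemCo A": 90, "ChemCo B": 60, "ChemCo C": 30}  # hypothetical greening scores
bonuses = award_bonuses(scores)
# The best performer's CEO earns the full maximum; the others earn
# proportionally less, preserving the competition the text envisions.
```

Whether bonuses should scale linearly with the score, or more steeply to sharpen the competition, is exactly the kind of question the text says would have to be settled by experience and negotiation.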


A new approach to CEO pay obviously would require new ways of overseeing business performance, presumably entailing new metrics and a complex accounting system for tracking many facets of corporate social responsibility. The system might be governed by an expert board equivalent to the U.S. Federal Reserve, or the task might better be pursued as a politicized undertaking, one drawing, for example, on environmental interest groups, consumer advocates, and other outside "watchdogs" as part of the governing board. Fortunately, many organizations already collect and analyze some of the types of information on which the new system would need to draw. The U.S. Government Accountability Office is an investigative arm of Congress charged with examining matters relating to the receipt and payment of public funds. It monitors not only financial matters but also program accomplishments of governmental agencies, and many of its methods could be carried over to the CEO pay/accountability system. The Environmental Defense Fund has an elaborate web-based corporate scorecard based on the Toxic Release Inventory, the Commerce Department records detailed information about business activity, and many other government departments either already collect relevant information or could fairly easily do so. Analysts at mutual funds and securities firms continually monitor managerial performance at companies whose stocks they are recommending; and many other professionals study, rank, and assess myriad other aspects of corporate behavior. Indeed, collection and analysis of economic data is a billion-dollar industry. No one is in a better position to figure out how to combine public purposes with private profitability than the CEO, and no one is in a better position to motivate lower-level executives and employees to work for public goals. Exactly where to start does not much matter, because it would take a generation or more to learn how to effectively operate a new corporate incentive system. The U.S. 
could begin with the Fortune 500 CEOs, or with whatever other set of major corporations enough people could agree on, and then expand/improve from there as experience reveals the shortcomings and strengths of the idea. Switzerland, Finland, and New Zealand might be even
better places to start, because of the smaller number of highly significant executives who would need to be tracked/incentivized.

Improving What Is For Sale

The CEO-pay proposal is oriented primarily to weaknesses in how goods and services are produced. A second core weakness in technological steering concerns what is produced and marketed, and this problem may require a different approach. Just as one would not trust a patient to choose the right antibiotic for an infection, it is not clear how far to trust individual consumers to pick the right item in other situations. For example, many computer users do not have appropriately ergonomic desk/chair/keyboard trays; at least a billion extra dollars are spent annually on shampoos and cleaners that product testers find work no better than inexpensive alternatives; and obesity caused partly by nutritional and leisure choices has become a worldwide disease afflicting 1.2 billion persons. Readers can easily supply their own favorite list of poor consumer choices. The persons directly involved are not the only ones who suffer, because their choices tend to have broader public consequences. Pedestrians, bicyclists, passengers, and other drivers may be killed or maimed by cars with mediocre tires, and even in one-person accidents the costs of vehicle repair and medical care are shared via insurance. The cotton clothing now dominant worldwide is worse than most alternative textiles in terms of soil erosion, pesticide use, and the energy required for drying wet clothes -- and probably no more than one shopper in a thousand knows it. Each used car maintained poorly and junked sooner than necessary translates into substantial energy and environmental consequences. Overall, it is harder to think of consumer choices that truly involve only the user than to list choices with public consequences. Thus, it is easy to build a case against unrestrained consumer sovereignty. What might be done? I believe it may be worth looking into institutional innovations that could achieve more competent ordering of goods and services from suppliers. Suppose for purposes of illustration that it makes
sense to have less cotton in textiles for the reasons mentioned above, which would require that a higher fraction of textiles be made from polyester, wool, and linen. How might such a shift be effectuated? One modest modification of ordinary economic life might go a long way: Instead of clothing manufacturers selling to wholesalers, who then sell to retailers, what if the intermediate wholesaling step were taken over by quasi-public organizations? Call them Democratic Wholesaling organizations until someone comes up with a better term. If appropriately mandated, such organizations could be concerned with public issues such as energy and environment in ways that ordinary businesses cannot or do not. Three questions immediately arise: 1) How to get manufacturers to go along with the shift? 2) How to get consumers to purchase the new mix of clothing? 3) How to pay for the changes? First, how could Democratic Wholesalers nudge farmers to grow less cotton and more corn, induce textile manufacturers to make more cloth from polyester and less from cotton, and induce clothing manufacturers likewise to play their part in the transition? Dead simple: Gradually reduce wholesale orders for cotton clothing, while increasing orders for alternatives. This could be done at the level of raw cotton and alternative fibers; or it could be done by reducing orders at factories that make denim and other cotton cloth; or it could be done at the step where pants and shirts actually are sewn. There would be unintended consequences to any such intervention, and those participating in the new system would have to gradually learn from experience. But this one fact is known as surely as a person can know anything: For all its shortcomings, the great beauty of a market-oriented economy is that the quest for sales induces farmers, factory managers, retailers, and others to adapt their behaviors. 
Unlike government, where ideology and bureaucracy can maintain the status quo decade after decade (e.g., the war on drugs, which has led to millions imprisoned with little effect on drug supplies), in economic life change often comes very swiftly if sellers see new opportunities to make money.


But how to convince retailers to actually agree to sell less cotton clothing? There are two simple methods. First, raise the price charged to retailers for cotton clothing, and drop the price for fleece, wool, and other cotton alternatives. How much of a price change will induce how much of a shift in retailers' ordering cannot be known in advance of trial-and-error experimentation. Second, given that Democratic Wholesalers have cut back on orders for cotton clothing, manufacturers will be unable to supply retailers' requests for quantities above that amount. Faced with stocking non-cotton clothing or leaving a third of the store empty, what would you expect Fossil managers to do? What about customers -- won't their tastes for cotton make them resistant to alternatives? Again, this is a question ultimately answerable only on the basis of experience. However, consumers' tastes are shaped by what they see others wearing, by advertising, by price, by what they find on store shelves, and by other contingencies. Over time, almost everyone modifies clothing choices. To induce a more rapid than natural change, however, Democratic Wholesalers could join manufacturers and retailers in some relatively above-board marketing tactics. Instead, or in addition, there would be price inducements: If the price of cotton clothing is higher than the alternatives, many buyers will move toward the less expensive items. Finally, who would pay for all these changes? Wholesalers presently are profit-making businesses, of course, and, if they get competent management, the Democratic Wholesalers likewise would enjoy a stream of revenue from retail customers sufficient to cover expenses and more. If a public-regarding product costs more to manufacture, how might Democratic Wholesaling deal with the higher cost? 
An instance of this arose in a little-known controversy over chemical brightening agents in detergents: Procter and Gamble and other major detergent manufacturers declined to use a new water-soluble, biodegradable polymer that Rohm and Haas Chemical Company spent seven years developing, because the innovative ingredient cost twice as much as the less environmentally friendly polymer conventionally utilized.
That sounds like an insuperable obstacle; after all, how many people would purchase an environmentally friendly car that cost twice as much? In this case, however, the number is deceiving; there is so little brightening agent in laundry detergent that doubling the cost would bring a $4 box or bottle of laundry detergent to about $4.01. Admittedly, that could add up to a million dollars in lost profits, some accountant has figured, and the soap industry therefore did not make the environmentally friendly move. Democratic Wholesaling offers an easy way to deal with the issue: Place orders only for detergents with biodegradable brightening agents instead of the penny-cheaper, non-biodegradable polymer! Pass the cost on to retailers, who presumably will pass the cost on to consumers, who will pay the higher price just as they now pay for other price increases. If no orders are placed with manufacturers who fail to use the biodegradable whitener, then all detergent products on retail shelves will contain the environmentally friendlier ingredient. An alternative possibility is to squeeze manufacturers' profit margins; P&G is an enormously profitable enterprise, and it is likely that executives would prefer to cut profits rather than lose shelf space to a competitor. Democratic Wholesaling could easily test just such a possibility, by gradually beginning to shift detergent business to Lever or another manufacturer willing to provide detergents with environmentally responsible ingredients at the same wholesale cost as the older, less environmentally friendly detergents. The same sort of arrangements obviously could be applied to more important aspects of economic life: fuel-efficient small cars with large, comfortable seats; tires that corner better under wet conditions than most tires now on the road; and essentially any other item of commerce. 
If it is not ordered via the Democratic Wholesaler, it cannot appear on retail shelves; and if it is not on the shelf (or mail order website), it will not be purchased. The shift is utterly simple in principle; no doubt it would be more complicated in practice.
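The arithmetic behind the detergent example is simple enough to spell out. The per-box cost of the conventional brightener below is an assumed figure (about a penny), chosen so the totals match the text's numbers; it is not industry data.

```python
# Illustrative arithmetic for the brightening-agent example: the
# biodegradable polymer costs twice as much as the conventional one,
# but the brightener is such a small share of the product that the
# retail price barely moves. The $0.01 brightener cost is assumed.
box_price = 4.00                 # retail price of a box of detergent
conventional_brightener = 0.01   # assumed brightener cost per box
biodegradable_brightener = 2 * conventional_brightener  # costs twice as much

new_price = box_price + (biodegradable_brightener - conventional_brightener)
print(f"${new_price:.2f}")  # a $4.00 box becomes roughly $4.01
```

The point of the sketch is that "twice as much" applied to a penny-sized ingredient is a penny, not a doubling of the product's price.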


Given that technologies are highly malleable, at least early on, and given that the particular forms technologies take depend on social influences, the way appears open for a very different repertoire of technologies than those presently dominant -- an alternative modernity, as Andrew Feenberg puts it. Or, more accurately, in principle it seems conceivable to construct future innovations and to reconstruct existing technologies to be safer, less environmentally destructive, or otherwise better. I have suggested two methods of rebalancing public concerns with business-consumer liberties because the existing system is producing some outcomes I consider indefensible (along with some very good outcomes). I also propose bringing economic life under more effective public control because I believe that citizens deserve to be represented in decisions that affect them. Moreover, the potential intelligence of democracy can be actualized only by considering a wide variety of ideas, interests, and angles on problems -- and this essentially never happens when a small number of insiders control decision making. Given that technology is a form of legislation, acting on those principles would mean creating mechanisms for representing citizens in technological choice to a greater and more deliberate extent than is possible under existing methods of technological-economic governance. Delegating many decisions to business executives and consumers makes good sense, but hasn't the economic-technological world gone too far in that direction if innovators and their customers have little incentive to consider and adjust broader effects as they reshape everyone's world? Changing CEO pay and/or instituting Democratic Wholesaling may not be the best ways to attack the problem, and there certainly is need for refinement and for additional proposals. What does not make sense, it seems to me, is to let a seriously flawed status quo go on and on while waiting in vain for perfect proposals. 
There is no such thing.



Business executives and their employees, together with consumers, help shape everyday life in a technological civilization. For steering technology, the business-consumer system rivals or exceeds in influence the electoral-governmental system normally supposed to govern a democratic political order. Everyone knows that market buying and selling makes valuable contributions -- encouraging innovation, putting downward pressure on prices, stimulating variety. It is equally obvious, however, that R&D staffs, entrepreneurial managers, and willing customers simultaneously create new problems such as terrible working conditions at Foxconn in China and even at some workplaces in the U.S. A full accounting of negative effects has never been attempted, but one certainly would have to add abandoned toxic waste dumps, deserted strip malls, perhaps a hundred thousand unnecessary or redundant products, and a significant variety of other side effects that accompany the quest for new products and customers. Environmental organizations, health researchers, government officials, and ordinary citizens then have to play catch-up, trying to correct or compensate. Governments do not do especially well at this; inherently a difficult task, catch-up is even harder because of the privileged position of business, which includes superior funding, expertise, organization, and access to government officials. Business activists and their allies, in other
words, are well positioned both to create environmental/social problems and to slow down or thwart attempts by Congress, the Environmental Protection Agency, and the Federal Trade Commission to solve those problems. The situation is less vicious in most other affluent countries, but has the same basic tendencies. To the extent that the business sector and its customers misgovern the steering of technologies, how might governance be improved? Over many years, citizens have been told that their own jobs and prosperity would be jeopardized by "anti-business" policies, such as higher taxes on business. Over the past year, financial newspapers have run stories almost daily about business executives being reluctant to hire new employees because of various uncertainties. Likewise common are claims that businesses cannot afford the expenses of making mines and factories safer: Some of the threats are empty ones, but for decades the United Mine Workers did not dare press for enforcement of safety legislation for fear that jobs really would be endangered. Also cautioned against have been full compensation to employees injured on the job, regulation of misleading advertising, tighter controls over consumer fraud, and strict liability for externalities such as pollution and other negative social effects of innovation. Business executives gradually have yielded on many of these points. Even in 2013, however, they continue to enjoy what could be considered extraordinary tax favors; workplaces are less safe than they easily could be, as are consumer products. Despite stricter labeling requirements and other truth-in-advertising laws, it remains legal to misrepresent products. Business activities abetted by consumer purchasing still pollute air, water, soil, and living beings even after decades of tightening controls. And salaries and bonuses going to the highest-paid executives are as much as a thousand times the minimum wage (with inequality in the U.S. 
much greater than in Japan, Europe, and most other regions). How might one think about revising market-oriented economies to preserve valuable features while repairing deficiencies? A starting point is to recognize that no one knows how it can be achieved; so gradual
experimentation and learning by doing are sure to be necessary. It is apparent, however, that government officials have created numerous environmental, worker safety, and other regulations that formerly did not exist. And it is obvious that some nations have more effective controls than others: business is rather tightly regulated in Denmark, much less so in Russia. If improved relations between business, worker, consumer, and citizen can be arranged in some countries, the barriers may not emanate from "capitalism" or other grand phenomena, but from smaller, more manageable sources -- and, hence, it is at least conceivable that renegotiation of the business-consumer system could occur over a period of years and decades.

Taxes, Tax Deductions, and Subsidies

What new strategies might be worth exploring? First, taxes and tax deductions/credits could be used more creatively. Taxes already are widely used to stimulate business investment, raise funds to pay poultry inspectors, and otherwise facilitate or dampen activities by businesses and consumers. When the Environmental Protection Agency issues a new regulation, it routinely is challenged in court by the chemical industry's professional association or by individual companies -- and, strangely enough, the government pays part of their expenses. The reason is that all expenses incurred by business organizations for attorneys, environmental scientists, secretaries, and accountants presently are tax deductible. If legal expenses instead had to come partly or entirely out of corporate profits, business executives would be less willing to go to court. That would mean that staff members at the Environmental Protection Agency and other government organizations would have to expend a smaller proportion of their scarce time and energy going to court, preparing to go to court, and being paralyzed by superiors worrying about legal challenges. Hence they could spend more time actually figuring out what regulations make sense, and enforcing those regulations. Essentially the same argument would apply to lobbying expenses; there are approximately twice as many business-oriented lobbying organizations with offices in Washington as all labor,
consumer, environment, health, and other interest groups combined. The fact that mounting all that effort is essentially paid for by consumers and taxpayers certainly doesn't discourage the overwhelming presence of the business sector in the nation's capital. Another category possibly ripe for reform is a shift in the way governments levy taxes -- not to increase total taxation, but to use taxation to encourage certain activities and to discourage others. Many economists recommend taxation (or subsidy) to raise (or lower) prices to better reflect the full social and environmental costs of products. A tiny example that makes the point: When one person's car emits smog-producing nitrogen oxides in Los Angeles and a different person suffers lung problems from the smog and goes to a doctor, the driver is not paying the full costs of the driving. Several European governments have initiated environmental taxation, and it appears to be having some desirable effects in modifying production and purchasing patterns. Instead of substituting centrally determined prices as did Soviet-communist economic planners (which induces distortions in entire industries' choices of factor inputs and production outputs), environmental taxes start from market-derived pricing and merely modify obvious shortcomings. For example, to reduce climate-changing carbon dioxide releases, high-carbon fuels may be taxed and low-carbon fuels subsidized, with no net cost to taxpayers. The same principle could be applied to toxic chemicals via a Chemicals Trust Fund analogous to the Highway Trust Fund that now collects excise taxes on each gallon of gasoline sold in the U.S.: Tax the sale of toxic substances on a gradually escalating scale over the next generation, and use the funds to subsidize industry, researchers, and consumers for converting to less- or nontoxic alternatives. 
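The revenue-neutral tax/subsidy idea can be sketched in a few lines. Everything in the example below is invented for illustration (fuel names, sales volumes, carbon intensities, the tax rate); the only point it demonstrates is the bookkeeping: every dollar taxed on high-carbon fuels is returned as a subsidy on low-carbon ones.

```python
# A minimal sketch of a revenue-neutral environmental tax: tax fuels in
# proportion to carbon content, then redistribute all proceeds as a
# per-unit subsidy on zero-carbon fuel. All figures are hypothetical.
sales = {"coal": 100.0, "natural_gas": 60.0, "wind": 40.0}   # units sold
carbon = {"coal": 1.0, "natural_gas": 0.5, "wind": 0.0}      # carbon per unit

TAX_RATE = 0.10  # tax per unit of carbon emitted
revenue = sum(sales[f] * carbon[f] * TAX_RATE for f in sales)

# Return every dollar of tax revenue as a subsidy on zero-carbon fuel.
zero_carbon_units = sum(units for f, units in sales.items() if carbon[f] == 0)
subsidy_per_unit = revenue / zero_carbon_units

net_cost_to_taxpayers = revenue - subsidy_per_unit * zero_carbon_units
# Net cost is zero by construction (up to floating-point rounding):
# prices of high-carbon fuels rise, prices of low-carbon fuels fall.
```

The same accounting would carry over to the Chemicals Trust Fund idea, with toxic substances in place of high-carbon fuels.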
An altogether different reform idea would be to strengthen other social interests so they could negotiate more effectively with business. What if a new tax were placed on large businesses and the proceeds used to subsidize consumer-protection activities such as those of Ralph Nader or Consumers Union, anti-poverty organizations, environmental groups such as the Natural Resources Defense Council, and good-government organizations
like Common Cause? The tax rate might be increased every few years until it became obvious that the contest between business and other social interests had become more equal. Or suppose that charitable donations to consumer and environmental groups were matched 1-to-1, or 2-to-1, or 1-to-2 by funds from ordinary tax revenue. At some level of funding, other interest groups would command sufficient resources to mount more of a real political battle with business. (By no means do I intend to say that business should always lose in legal and political battles -- I am interested only in a level playing field so that all relevant social interests have a more or less equal opportunity to make their cases.)

Workplace Democracy
Another possible reform could come from greater democracy in the workplace. Could corporate executives ever get used to sharing authority with workers in operating businesses? Would that help keep bosses from running amok with dubious plans of the sort pilloried in Dilbert, and would it help enroll lower-level workers in thinking of their workplace as their workplace? Such a reform would not necessarily end up benefiting customers and members of the public, for many workers might rather band together with management to sell, sell, sell in order to boost their own wages and benefits. However, there could be times when workers might function as "representatives" of people otherwise excluded from a business firm's technological-economic decision making. For example, with regard to noise, air pollution, and traffic, some workers who live locally might align with other local residents in an alliance to influence management. And if it were known that workers had a share in decision making, environmental organizations or other outside groups might ask workers to speak on their behalf in other aspects of the firm's decision making. Even if only a minority of workers were responsive, having some internal representation is probably better than none.


This is not entirely a theoretical idea. In Sweden and other Scandinavian countries, "co-determination" laws give workers certain guarantees. Some pertain to job security, guaranteed vacation, and other personal matters; but workers often obtain influential roles in shaping workplace technologies so as to assist them in their work rather than replacing or deskilling them. This is said by advocates of workplace democracy to have the effect of empowering workers while also raising productivity (meaning more production per hour, and hence lower costs). However, France has high productivity without much workplace democracy, so it is not entirely clear what is causing what. In the Mondragon region of Spain, a Catholic priest more than 50 years ago helped local residents to organize cooperatively owned businesses, of which the most important was a bank. Once there was a source for loans, additional businesses could be started more easily, ultimately totaling more than 50 distinct businesses. These were not just little bookstores or bakeries; the largest was a refrigerator manufacturing plant employing three thousand. There has been some deterioration over the years, but thousands of Spanish workers still have the right to vote to hire and fire their bosses -- and one wonders why that simple idea has not become better known and more widely tried. Workplace democracy has drawbacks, but so does the mainstream system (and every imaginable system of organizing feisty humans). Many different job roles are needed by even a moderately complex business, and it sometimes is crucial to have a few executives empowered and trained to make quick decisions. But how did billions of people come to assume that a top-down hierarchy is the only way to achieve that, the only way to organize a workplace? Doesn't it pretty much have to be a manifestation of legacy thinking for almost everyone to automatically suppose that executives have to hire workers instead of the other way around? 
There are worker cooperatives in the U.S., of which the largest concentration historically has been in the sawmills and other woodworking industries of the Pacific Northwest and north central states. But nothing on
the scale of Mondragon has arisen, and most American workers have little recourse when management decides to replace skilled workers with robots or close facilities and move production to other countries. General Motors and many other large businesses have been doing this for several decades, to such an extent that almost all the new jobs created in the past decade in the U.S. have been in relatively small businesses or in schools, medical facilities, and federal/state/local governments. An ad for automated baking equipment in an industry publication epitomized this state of affairs. It showed a gigantic room for industrial baking, the room as dark as practicable for a photo, with no persons visible. The ad centered on the slogan "Lights Out Baking" -- meaning that the equipment provider was offering to create a facility with no humans involved, just automated machinery that didn't need light. Few plants have gone that far, though Cheez-Its are made on a machine the size of a football field that requires only a handful of attendants. More generally, a trend toward fewer employees per cracker or car has been in place for a century, with no end in sight. Considering the human toll taken by unemployment and underemployment, and considering that young people 18-25 in Spain had an unemployment rate around 50 percent in 2012 (a harbinger of the future in other countries?), is it wise to leave choices about workplace technology and hiring in the hands of a few managers in each business? If that seems questionable, how might the broader social consequences of technological automation and other workplace decisions be opened to a broader set of participants without ruining the efficiency of the business? Should worker participation in decision making be mandated by law? Or are there better ways of approaching the risks of technological unemployment and other workplace issues? 
I think those are tough questions, and it makes sense to study and experiment rather than jumping to long-range plans and conclusions. But it seems rather odd that citizens of a democratic society would remain complacent -- indeed, complicit -- in allowing authoritarian governance of
their workplaces. Even if it is unclear exactly what should be done, it is easy enough to wonder, complain, discuss, and experiment with various ways of sharing authority in the workplace. Beyond whatever the immediate benefits might be for currently disenfranchised workers and others affected by what amounts in many cases to almost a managerial dictatorship, there is a broader public issue: What are the chances that working life (and school before it) can be lived in an authoritarian manner, yet people magically transform into interested, active participants when they leave work/school and enter as citizens into community and political realms? Workplace authoritarian practices surely spill over into the rest of social life, just as environmental pollution from the workplace is not confined there.

Auctioning the Right to Innovate

The changes discussed above do not really reach the issue of the number of innovations undertaken in a year or decade, which leaves open the possibility that, however meritorious the specific innovations, their total number could swamp the capacity of journalists, social scientists, government officials, and ordinary citizens to comprehend what is happening. Moreover, the reforms proposed thus far do not provide systematic ways of funding compensation for victims, environmental cleanup, or other corrective efforts. And there is a third problem. When an entrepreneur notices a potential niche in a market and develops a product that will meet the "need," the market system is succeeding -- in a sense. But this system does not discriminate between an innovation that cures the eye disease retinitis pigmentosa and an innovation that adds to store shelves a strawberry breath mint with a liquid center. So much seems to be happening so rapidly that the clichéd notion, "You can't stop progress," seems to have some merit. How might observers better distinguish potentially worrisome from innocuous innovations?


To guard against the above difficulties, a different sort of economic innovation is worth considering: Auction the right to innovate. Set a quota on major innovations by large firms, then let them bid for the limited number of licenses. If managers believe their new product will win more customers and profits, they logically should be willing to bid a portion of the projected profits to gain a license to innovate. Conversely, if a manufacturer expects low profits and is unwilling to bid high enough to win a license, that probably means that would-be buyers are not terribly enthusiastic; and the lack of enthusiasm, in turn, probably indicates that the new machine or product does not fill a pressing need. To guard against worthy innovations being stymied, procedures could be established for appeal to a regulatory agency to grant exemptions for a limited number of valuable innovations that do not make it through the auction for several years. Exemptions might be crafted as well for certain high-priority categories of innovation. And, as hinted at above, it probably would be wise to exempt from the auction system businesses too small to have the funds and staff required to prepare for auctions or pay for licenses. Key advantages of the auction mechanism are:
(a) It raises funds needed to pay for closer monitoring of innovations, compensation for those victimized, and correction of unanticipated problems;
(b) It is an automatic priority-setting process not requiring thousands of bureaucrats sifting through applications; and
(c) It leaves would-be innovators free to decide when and if to press ahead.
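The core of the auction mechanism is straightforward to sketch: set a quota of innovation licenses and award them to the highest bidders, with the proceeds funding oversight and compensation. The firm names and bid amounts below are invented for illustration.

```python
# A toy sketch of the license auction: firms bid a portion of projected
# profits for a limited number of rights to introduce a major innovation.
# Firms and bids are hypothetical.
def allocate_licenses(bids, quota):
    """Return the winning (firm, bid) pairs, highest bids first."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    return ranked[:quota]

bids = {"FirmA": 9.0, "FirmB": 2.5, "FirmC": 7.0, "FirmD": 4.0}  # $ millions offered
winners = allocate_licenses(bids, quota=2)

# Proceeds from winning bids fund monitoring, compensation, and cleanup.
funds_raised = sum(bid for _, bid in winners)
```

Note how the mechanism doubles as a priority signal: the size of a winning bid flags which innovations deserve the scarce attention of journalists and regulators, which is advantage (b) above. Exemption procedures for worthy-but-unprofitable innovations would sit outside this simple ranking.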

Clearly there would be innovations lost by this process; but is technological society in greater trouble because of a shortage of innovations, or because of lack of attention to those that do occur? Reducing the sheer number of innovations might actually be a net gain, or at least not much of a loss. Larger innovations meanwhile would automatically be called to
governmental and media attention by the size of the bid, and scarce analytic resources of journalists, social scientists, public interest groups, and government regulatory agencies thereby could be targeted selectively to the major innovations likely to pose major risks. Funds from the auction -- huge amounts of funds, potentially -- would be available to support the inquiry.

These ideas are merely illustrative, a small sample of those that might emerge in a generation of sustained study, debate, and experimentation. The proposals are politically infeasible at present, and some might be ill-advised. But a great many regulations introduced over the past several centuries have managed to reshape trade without much damage to business profitability or creative innovation; and if one believes that the effects of business privileges on technological steering are quite pernicious, then it could be logical to experiment carefully with further efforts to structure business executives' behavior.

In summary, the privileged position of business puts sharp constraints on the intelligent democratic governance of technology. No one knows exactly what should be done, but a considerable variety of strategies are available that might reduce the problems caused by the business-consumer system while retaining most of the advantages of low prices and freedom of choice that market buying/selling provides. In the twentieth century, most governments cautiously experimented with restraints and inducements for businesses and consumers; in the 21st century, bolder initiatives may prove feasible if enough people come to understand the nature of the problem and the potential availability of solutions.



In chapters 7 and 8, I nominated some economic reforms for your consideration because governance occurs not just through government but also through economic decisions (and via other social institutions). Still, as possible routes toward improved social shaping of technologies, nothing beats governments: They have the tax money, the legitimate authority conveyed via contested elections, the capacity to criminalize antisocial business activities, and other enforcement mechanisms that ordinary citizens, consumers, environmental organizations, and business corporations mostly lack.

Hardly anyone directly opposes better government, but not many actively seek it either. Teachers, political candidates, television commentators, and even one's peers implicitly promote ignorance rather than enlightenment about the prospects for improving governmental steering of technologies. This is partly because most know so little about government, or about technoscience, or about both; but it is also because few people understand or believe that there really could be such a thing as deliberate political innovation. Yet innovations in government almost certainly would be required for better steering of technological innovation, because a backward political system cannot hope to govern phenomena as complex and fast moving as contemporary technoscience.

Just because something is needed does not make it happen, of course. Few people get as much loving kindness as they might enjoy or need; if better government is that sort of need, if achieving it would require angelic people, then humanity is out of luck. Is it? How would one know whether governments potentially could innovate to take better advantage of the intelligence of democracy?


Given the levels of (mostly fruitless) dispute about politics and government, a reader could be forgiven for doubting the two implicit promises in the title of this chapter -- pertaining to innovation and pertaining to intelligence. As long as the word "potential" is there, however, I assure you that there are promising avenues that deserve more patient exploration than they usually are given. The reason is a simple extension of the well-known cliché "Two heads are better than one": decisions become less unintelligent as fewer important considerations are neglected. Because everyone has biases, and because those holding elected and appointed governmental positions especially have systematic blinders, well-rounded consideration of problems and solutions requires interaction among affected interests. The wider the consultation, and the more that authority is shared with those who have needs and insights bearing on the issues under consideration, the less likely it is that insiders can impose an unintelligent course of action. (See James Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter Than the Few, 2004.) Reconciling divergent claimants' needs and beliefs is the simple underlying basis for why democracy is not just a system of rights and liberties, but actually is -- or could become -- a system for making intelligent decisions. It is more of a goal than an accomplishment, of course. What one normally refers to as a democracy these days is a Bizarro version of Superman. Well, not that bad: As the famous British Prime Minister Winston Churchill said, "Democracy is the worst form of government, except for all the others."
A well-educated, thoughtful, eloquent, highly skilled political leader, Churchill nevertheless was molded by his times and by the legacy thinking that he inherited (as are we all); in many respects, therefore, this 20th-century hero of Britain's fight against Hitler was almost as ignorant as the modal citizen in not realizing that the democracies now in existence are pale imitations of what might someday be possible. How to arrange for much wider sharing of influence in deliberations and decisions, and how to induce government officials to have the necessary competence and motivation to play their roles in regulating, taxing, subsidizing, and otherwise steering technoscientific innovation and utilization? Even the experts are far from understanding how to deliberately design a technological civilization that can escape what Winner calls "technological somnambulism"; so it would not make sense to worry now about all the specifics and all the inevitable problems that would have to be taken into account by new arrangements for governing. All of that will have to be worked out as things proceed -- if, indeed, enough people ever do become motivated to move democracies to a more advanced level.

If demand for more sophisticated government is to stand much chance of emerging, people need at least glimpses of what might be possible; they need hope, and they need ideas about potentially feasible reforms to existing political institutions. Developing such visions requires setting the status quo aside temporarily, emulating architects and other designers in trying to sketch the main outlines of hypothetical new structures that could fill in gaps or even replace certain facets of contemporary government. This chapter inquires into several changes in the political realm that I believe deserve consideration and experimentation.

THINKING ABOUT DEMOCRACY

First, I want to briefly review and comment on the range of possible meanings of the term democracy, so you will see that present arrangements occupy just a few nodes in what may be a larger universe of possibilities. (A table later in the chapter lists a few of the alternatives.) The intention is to help break you free of a tendency to think of U.S. practices as constituting the one, true, best form of democracy. Oddly enough, even in 2013, after nearly a quarter millennium of democracy practiced in the U.S., most people have no more than the faintest idea that there are many different ways to organize democratic politics.
This is all the stranger given that billions watched the wedding of Prince William in 2011, so they must know that Great Britain has a monarchy; and many realize that the nation has a prime minister and parliament, with an upper house (the House of Lords) that is not elected. How many appreciate the fact that the United Kingdom (like many other countries) holds elections not on a fixed schedule but whenever the prime minister lacks sufficient support to win important parliamentary votes on proposed legislation? How many know that British electoral campaigns last for weeks rather than years? When the U.S. government stalemates, as between President Obama and the House Republicans, wouldn't it be great to call a new election instead of waiting, and waiting, and waiting in hopes that the politicians will stop dithering and actually work something out? Sometimes there are genuine stalemates, when the present crop of elected officials simply cannot govern any longer; but the U.S. has no solution except to wait for the next election. Very inefficient, surely outdated.

If you surveyed the world's existing political systems, you would find many other ideas for improvement. For example, anyone who has complained about low voter turnout in the U.S. (less than 50 percent in many elections, especially local ones and especially among 18-21 year olds) might want to consider the Australian plan: That country achieves 90%+ voter turnout by requiring that non-voters pay monetary fines. I might prefer it the reverse way: Pay $100 to everyone who does vote, which would have the effect of immediately boosting the economic well-being of poor people, who now vote at lower rates. And I might consider requiring voters to take a simple test before being allowed to vote, inasmuch as a majority actually cannot name a single thing that their Representative has done in the past two years. I doubt it does citizens a favor to allow a continuation of widespread ignorance and widespread non-voting, with the U.S. worse than many.
These introductory paragraphs are intended just to warm you up to the idea that there are many different forms that democracy can take, and the very best ones may not yet have been tried out. Just as you would not want to wear shoes designed 200+ years ago, why would you want to be governed by a political system designed before your great-grandparents were born? Of course there have been changes over that period -- such as creation of the Environmental Protection Agency, Homeland Security, and Sallie Mae (the student loan agency) -- but the basic form of government written into the U.S. Constitution in 1787 has not changed. Ought it to be? In what ways? And what changes do other state, national, regional (EU), and international constitutions and other governing arrangements need? Given how little discussion there is of the possible alternatives, how could anyone have an adequately informed, thoughtfully considered opinion on the subject? And if such a fundamental issue is surrounded by ignorance, what does that say more generally about the care and sophistication with which political innovation is being undertaken? What are the chances of staying in step with a fast-moving technosphere if citizens and elected officials cannot even conceptualize what a truly modern government might look like?

Participatory Democracy: Each person speaks for herself. Conspicuous success: Would allow more participation in more settings more of the time. Major shortcoming: Better suited for small localities than for national or global governance, with the New England town meeting as an exemplar. (See Richard Sclove, Democracy and Technology, 1995.)

Representative Democracy (sometimes called liberal democracy): Citizens vote for representatives, but exercise no direct control over policy outcomes. Conspicuous successes: Peaceful transfers of governing authority; legislators often are forced to abandon or weaken wrong-headed schemes because they cannot win majority assent. Major shortcomings: Have-nots are under-represented; weak incentives for actually reducing or heading off social problems; connections between citizens and representatives are tenuous.

Associative Democracy: Citizen-to-citizen cooperation in pursuit of common purposes, making governance less dependent on large bureaucratic corporations and governments. Think soccer clubs, environmental organizations, and churches, but covering many more realms of everyday life, with many more participants, and with paid staffs accountable to ordinary members. Conspicuous success: Tocqueville's America circa 1840 -- volunteer fire departments, immigrant insurance cooperatives, Elks and Masons. Major shortcoming: Relies on episodic voluntary action, whereas government officials and business executives have reliable funding and institutionalized authority. (Paul Hirst, Associative Democracy, University of Massachusetts, 1994.)

Workplace Democracy: Workers at all levels share influence; workers hire managers instead of vice versa. Conspicuous successes: Workers' cooperatives in Mondragon, Spain. Major shortcomings: Existing management blocks the way; most employees cannot conceptualize the possibilities, and lack requisite training; consumers exercise a kind of tyranny. (Robert Dahl, A Preface to Economic Democracy, California, 1986.)

Deliberative Democracy: Proposed reforms to expand public deliberation so as to enhance the quality of policy outcomes in representative democracies. An example is Deliberation Day, a proposed new U.S. holiday on which millions of people would engage in structured debates about issues that divide the candidates in an upcoming presidential election. Major shortcoming: Feasible reforms are tepid; exciting reforms are infeasible for the foreseeable future. (Bruce Ackerman and James Fishkin, Deliberation Day, Yale, 2005.)

Proposals That Might Improve Democracy

Many people believe that the lure of re-election assures that legislators will work hard to solve public problems. That is a grave error, based on not having thought about the realities of political life. Reducing or solving public problems often requires hard choices that make some constituents very upset; so it is much easier for those seeking re-election to put the unpleasantness off year after year, and even decade after decade. Raising taxes on the wealthy is a budget-balancing move that might be widely applauded, but not by most of the wealthy (other than Warren Buffett and a few other do-gooders who recognize that present tax rates on the rich are the lowest in nearly a century). Making hard choices could mean prohibiting builders and city councils from allowing new construction in flood plains (damages from which get paid out of federal flood funds, thereby raising taxes for everyone whose home is not subject to routine flooding). Perhaps hard choices would mean shutting down armaments manufacturers or Army bases or NASA facilities or Veterans Affairs hospitals or other lucrative plums that are highly esteemed by those who make a living at taxpayer expense. Perhaps limits would be set on budgets of the National Institutes of Health, so as to slow down the rate at which new high-tech diagnostic tests can be introduced -- most of which drive up the cost of medical care. I am not attempting to convince you of anything about these or other substantive public policy issues; these ideas merely suggest the possible desirability of modernizing government to induce elected officials to make harder choices, sooner, as often as public needs and technological pace require. Each of the illustrative actions mentioned above has complex positives and negatives, so there cannot be such a thing as a correct decision. But there certainly can be majority decisions, taken after appropriate study and deliberation, to do what is considered a lesser evil than maintaining present lazy trajectories.

One of the cheapest and most effective ways to get another person to do something is praise. It can be even more effective than money. In groups, organizations, and larger collectivities, the equivalent of praise is the granting of high status. A person with status gets attention and respect; her word carries more weight; he is asked to serve in prestigious roles. As with any other incentive, praise and status can be overdone and thereby lose their effectiveness. Some of your generation grew up entirely too used to receiving gold stars for any little educational achievement; and many film and athletic personalities, weary of the fawning fans, begin to act irresponsibly instead of using their positions for a reasonable combination of personal satisfaction and public service. Still, the world does not stop using grades, money, and other inducements just because these reward systems have seemingly inescapable negative side effects.
Why do most of us not think of using praise or status more systematically as a way to shape the behaviors of people whose actions matter to us? In principle, status could be doled out in ways that grant very high esteem to legislators who score highest in social problem solving. At present, politicians as a group are held in a certain disdain, lumped together as if they all were crooks or interested only in power. That short-cut reduces the knowledge burden on citizens, journalists, and others who otherwise would have to actually learn about the vast differences among elected officials' knowledge, skill, and diligence. But the lazy approach deprives the public of a potentially influential lever by which to encourage the cadre of legislators to strive for our approval. Anyone who pays close attention to the German Bundestag, the Japanese Diet, or the U.S. House of Representatives knows quite a bit about who the sharpest and/or most effective politicians are. Why not organize such observations far more systematically; why not diffuse them much more broadly so that a great many citizens can begin perceiving more accurately; and why not use the rankings to give and withhold what politicians want?

For example, especially knowledgeable and effective government problem solvers could be granted additional authority, which most politicians desire partly because it helps them achieve more of their goals, and partly just because it feels good. Chairs of committees and subcommittees in the U.S. Congress now are doled out primarily on the basis of seniority -- who has been there the longest. The same goes for being put on committees perceived as most important, such as the Appropriations committees that make decisions about spending. Instead of longevity, suppose that most authority went to the best and the brightest? Elected officials then would have a strong incentive to perform well in their public-serving tasks in order to obtain what is in their own self-interest: greater authority. Grand solution, perhaps. But who is to judge the ones who are most deserving, and on what basis? Fortunately, there already is considerable experience with performance-based rankings.
Just as the American Bar Association rates potential judges before they are nominated or confirmed, so could not-for-profit organizations rate legislators. In fact, some environmental organizations already have a "dirty dozen" list of those who have opposed environmental legislation; citizens in favor of lower taxes sometimes rank legislators in terms of their votes on spending and taxation; "good government" organizations keep track of how many votes are skipped by each Senator and Representative; and so on. If tasked to do so, the U.S. Government Accountability Office could compile a far more complete and sophisticated taxonomy and ranking, could post it online, and could allow many different organizations to use the data to rank legislators according to a great many different criteria.

Of course there would be many problems in doing any ranking, even when those performing the ranking are sincere and conscientious. For example, a ranking by the Sierra Club probably would overrate legislators who emphasize the Club's concerns, such as wolf habitat, and probably would underrate legislators who focus on matters that Sierra does not prioritize, such as green chemicals. On the other hand, this would not much affect the Club's ranking of legislators perceived as anti-environment, who presumably would cluster in the lower half of the scale. Similar problems would confront rankings by Tea Party or other conservative organizations. Rankings between and among organizations would be even more problematic. For example, how to rate legislators on gun control legislation would hardly be agreed upon by the National Rifle Association and by those who do not see why anyone needs an assault rifle. Thoughtful, caring, knowledgeable legislators could disagree with each other on whether to subsidize state universities at taxpayers' expense versus scaling down public universities, requiring students and their families to cover more of the expense of a college education. In these and other areas of public policy, there is no such thing as the one correct outlook.

Moreover, how to integrate the diverse rankings by many different organizations obviously would be difficult. Such meldings are not impossible, however. U.S. News and World Report annually picks the top 50 colleges out of 3000 in the country (Rensselaer's ranking tends to be in the 40s). Likewise, Investor's Business Daily each week chooses the top 50 out of nearly 5000 U.S. businesses by merging a dozen different criteria pertaining to sales growth, earnings, and other not-fully-commensurable indices. (The top companies are considered to be the best investments.) Is it a perfect list? Of course not, for an otherwise promising gold mining company could suffer disastrous flooding of a major mine next week; Uggs boots could go out of fashion and drive down Deckers' stock; Netflix executives could (did) make the stupid mistake of trying to split the business between DVDs and streaming video -- leading to a $2 billion drop in the company's value. Equivalent disasters could occur to top-ranked politicians. So the point is not that rankings of past behavior are always reliable guides to the future, merely that some ranking usually is better than none, and that there always are ways to integrate seemingly disparate considerations.

Analytically sophisticated and diverse rankings of politicians, however they were put together, would cause legislators at the very least to know that their every move was being surveilled. Journalists could have a field day publishing exposés of politicians who rate especially well or especially poorly; and journalists surely would concoct new stories by interviewing spokespersons advocating that their way of ranking a given legislator or set of them deserves greater weight and attention. Some of it would be hokum; however, it is difficult to imagine that the resulting news coverage of legislation and legislators would not constitute at least an order-of-magnitude improvement over what now passes for news. Taking the rankings even more seriously, they might be used to prohibit low-ranking officials from running for re-election. Ouch, that would really hurt; old-timers ready to retire might not mind, but younger pols just getting started would do just about anything to stay eligible for re-election.
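The kind of merging that such publications perform can be approximated with a simple rescale-and-average procedure: put each criterion on a common 0-to-1 scale so no single unit dominates, then take a (possibly weighted) average. The sketch below is an illustrative assumption of mine, not a description of any actual magazine's methodology, and the legislator names and scores are invented.

```python
# Hypothetical sketch of merging not-fully-commensurable rankings
# (attendance percentages, bill counts, etc.) into one composite order.
def composite_rank(scores_by_criterion, weights=None):
    """scores_by_criterion: {criterion: {name: score}}, higher = better.
    Each criterion is min-max rescaled to 0..1, then weighted and
    averaged into one composite score per name; returns names sorted
    best-first."""
    criteria = list(scores_by_criterion)
    weights = weights or {c: 1.0 for c in criteria}
    names = set().union(*(scores_by_criterion[c] for c in criteria))
    composite = {}
    for name in names:
        total = 0.0
        for c in criteria:
            vals = scores_by_criterion[c]
            lo, hi = min(vals.values()), max(vals.values())
            scaled = (vals[name] - lo) / (hi - lo) if hi > lo else 0.5
            total += weights[c] * scaled
        composite[name] = total / sum(weights.values())
    return sorted(composite, key=composite.get, reverse=True)

# Invented data: attendance rate and bills advanced for three legislators.
scores = {
    "attendance": {"Alvarez": 95, "Brown": 70, "Chen": 80},
    "bills_advanced": {"Alvarez": 8, "Brown": 10, "Chen": 2},
}
print(composite_rank(scores))  # → ['Alvarez', 'Brown', 'Chen']
```

The `weights` parameter is where the political disputes described above would reappear: the Sierra Club and the National Rifle Association would simply plug in different weights (and different criteria) and get different orderings from the same underlying data.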
Whether the cutoff should be the lowest five percent or the lowest twenty percent, I think only experience could reveal. Hypothetically, though, imagine that the worst five percent were lopped off every two years for the House of Representatives, which has two-year terms; that the lowest ten percent were denied re-nomination for positions with four-year terms (some state legislative offices); and that the lowest fifteen percent were denied re-nomination for Senate seats, inasmuch as Senators serve for six years. How many electoral cycles would you guess it would take before smarter, harder-working, more cooperative problem solvers came to dominate elected office?

Competence and Representativeness

In addition to the rankings proposed above, suppose that elected officials as part of their employment contract had to actually learn something about the matters over which they would be deciding?! How could those responsible for criminal justice policy make sensible choices without visiting dangerous and degrading jails and prisons, without talking with police chiefs and beat cops who have to enforce the unenforceable drug laws, without ongoing study courses with criminologists, without trips to other countries to see how their governments handle similar issues? How could members of the House Science Committee make intelligent judgments about how the National Science Foundation is directing billions of dollars without considerable understanding of what the organization is doing? Who would certify that a legislator had studied enough? What would happen to those who did not? Clearly there would need to be special new institutions to carry out such plans, and each innovation would raise new problems: susceptibility to bribery, partisan manipulation, and simple bureaucratic inertia.

To take a different tack, and to put it in the form of a rhetorical question: Do you really want 21st-century elected officials to consist primarily of attorneys, or would you prefer to have a decent percentage of scientists and engineers chosen from among those who actually understand something about the technoscientific matters that bear on contemporary governmental decisions? What about representation of plumbers, electricians, small business owners, teachers, nurses? The list of possibilities goes on and on. One can imagine ways to broaden the cohort of those seeking election, as has occurred to some extent in recent decades for women, Hispanics, and, to a lesser degree, African Americans. Ordinary social changes may continue gradually to chip away at the dominance of white males in government, but I want to raise for your consideration a more mathematically sophisticated possibility: random selection. Developed nearly a century ago by Pearson and other statisticians, random selection of, say, the 435 members of the U.S. House of Representatives could achieve a better match between legislators' occupations and the public's occupations than can be achieved in any other way I can conceive. More importantly, random selection would achieve a near-perfect (+/- 5%) match on every other variable that matters. Instead of vaguely gesturing toward representation that anyone with open eyes knows is partly rigged toward those with money, looks, and other favorable qualities, why not consider bringing political representation up to date? It might actually come close to what was naïvely envisioned by some of the early enthusiasts who spoke so glowingly about the wonders of democracy. Maybe they were not naïve after all, just ahead of their time?

Some of you immediately will demur, thinking that unqualified people will be put into high office. Partly that stems from your over-estimate of present office holders' abilities, but there also is merit in the concern. There are work-arounds, however. For example, stratified random sampling would select from among those who had demonstrated sufficient competence in world affairs, economics, criminal justice, biomedicine, and/or any other fields you believe ought to be included. I doubt that many people would want only geniuses in Congress who could pass tests on everything. So you would need to consider just how much competence is enough, and in which domains. But testing is something that the Educational Testing Service (SAT, GRE, MCAT) knows how to do.
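For readers who like to see the mechanics, here is a toy sketch of stratified random selection: seats are apportioned to each stratum in proportion to its share of the population, then filled by random draw from that stratum's pool of (pre-screened) eligibles. The strata, pool sizes, and names below are entirely hypothetical, and a real apportionment would need something like a largest-remainder rule so that rounded seat counts always sum exactly to the chamber size.

```python
# Toy stratified random selection for a legislative chamber.
# All strata and pools are invented for illustration.
import random

def stratified_selection(pools, shares, seats, seed=0):
    """pools: {stratum: [eligible names]}; shares: {stratum: fraction of
    the population}. Returns (stratum, name) pairs whose composition
    mirrors the population shares."""
    rng = random.Random(seed)  # seeded only so the example is repeatable
    chamber = []
    for stratum, share in shares.items():
        n = round(seats * share)  # naive rounding; see caveat above
        chamber.extend((stratum, name) for name in rng.sample(pools[stratum], n))
    return chamber

pools = {
    "trades":        [f"trades-{i}" for i in range(50)],
    "professionals": [f"prof-{i}" for i in range(50)],
    "service":       [f"service-{i}" for i in range(50)],
}
shares = {"trades": 0.4, "professionals": 0.3, "service": 0.3}
chamber = stratified_selection(pools, shares, seats=10)
# → a 10-member chamber: 4 trades, 3 professionals, 3 service
```

The competence screen discussed above would simply shrink each pool before the draw; the proportions, and hence the representativeness, are unaffected.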
And of course there are other ways of assessing competence including, say, whether a person has served in the Peace Corps or otherwise displays any evidence of understanding that there is a world beyond the borders of the U.S. The U.S. system is probably the world's worst in terms of domination by attorneys, but no nation I know about has adequate mechanisms for achieving genuine representativeness and for elevating only competent people to high office. Once enough people have agreed upon those goals, then we can get down to arguing, planning, and experimenting with techniques to achieve the aims.

Pay

Another proposal, perhaps more outrageous than the foregoing: Why not pay office holders to run the country well? Yes, members of Congress already make more money than I do; but then they have to maintain homes in Washington, DC as well as in Butte, Montana or wherever. And many of them could make a lot more money in law or business. But the main ways they can increase their base salary are ones that are undesirable for the rest of us: outright bribery, misallocation of campaign funds, favors from wealthy people (such as trips on yachts), speeches to the National Association of Business or equivalent groups willing to pay honoraria of $50,000 or so (= disguised bribe), or continued part-time work in their law practices. All these activities DETRACT from actually serving as a representative of the public doing the public's business.

An obvious way to get legislators' attention is directly analogous to the proposal made in a previous chapter for CEOs of business corporations: Let them earn substantial incomes by solving or ameliorating public problems. Legislators of course do not directly hire and fire, pollute or clean up, invest in the U.S. or build plants overseas, or otherwise do the kinds of things that business executives do. But legislators do a lot of damage anyway, and a lot of good. I'd love to shift the ratio toward the good, and I can think of no more effective inducement than money. Air pollution goes down, your bonus goes up. Bridge and highway maintenance moves from the D grade now given by the American Society of Civil Engineers to a C or even an A, your bonus goes up. The number of people without medical care declines, bonus goes up.
The average student loan debt accrued per graduate goes down, legislative bonuses go up.

Senators and Representatives now earn about $175,000 per year, a bit more for the Speaker of the House and others in leadership roles. Suppose for the sake of discussion that they were allowed to earn ten times as much for maximum performance. That would be unfair to many other people who work just as hard and are just as deserving, and in a good world I would not suggest such high pay for anyone. But the top elected officials in the U.S. are among the most important decision makers in the world; and as the economy now operates, people holding key roles tend to make a lot of money. And fairness is only one of several criteria that should be applied to this case. More important, in my mind, is whether the 435 members of the House and the 100 Senators (and the equivalent in the French Chamber of Deputies, German Bundestag, and so forth) actually govern the planet well. If each of the American national legislators earned $2 million per year, let us say, that would total just over $1 billion. Do you know how tiny a fraction that is of the budget that they are responsible for overseeing? Good decision makers could easily save ten or a hundred times that amount if they were serious about removing superfluous U.S. troops now stationed in Japan, or if they learned how to deftly cut the bloated budget of the National Institutes of Health (which has helped nearly triple the percentage of Gross Domestic Product spent on medicine -- an expenditure now nearing $1 trillion annually). Good governance is cheap compared with mediocre governance. Again, many of the same difficulties would arise as those discussed above with respect to denying re-nomination, and as discussed in a previous chapter with regard to paying corporate CEOs.
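The back-of-envelope arithmetic is easy to check. Note that the federal budget figure below (roughly $3.5 trillion in annual outlays around 2013) is an approximation I am supplying for scale, not a number from the text.

```python
# Scale of the proposed legislative payroll against the budget it oversees.
legislators = 435 + 100          # House + Senate
max_salary = 2_000_000           # hypothetical top pay-for-performance
total_payroll = legislators * max_salary   # just over $1 billion
federal_budget = 3.5e12          # ~FY2013 outlays (approximation)
share = total_payroll / federal_budget
print(f"${total_payroll / 1e9:.2f} billion = {share:.4%} of the budget")
# → $1.07 billion = 0.0306% of the budget
```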
Another complication, not previously mentioned, is that some government policies take many years to come to fruition -- for good or ill -- so that part of any pay-for-performance package would have to be held in escrow and doled out over the years. There also would be complications if legislators could be charged fines when their work causes more harm than good. And who decides all this? Still, when it comes to the key people governing your planet, would you prefer a somewhat awkward, flawed, controversial, and occasionally even abused incentive system, or would you prefer no incentive system at all?

The ideas for political innovation proposed in this chapter are a subset of the totality of ideas that could emerge if enough people thought long and hard, had funds to experiment and evaluate, and had the time to gradually learn via trial and error. But such a project has to start somewhere, and it is in that spirit that I have offered the ideas so briefly introduced above. If this were a longer treatise in political science, it would need also to include the presidency, the Supreme Court, the internal organization of Congress (e.g., committees and their staffs), the Department of Housing and other cabinet-level departments, regulatory agencies such as the Food and Drug Administration, and many other facets of government. And it should deal with dysfunctionalities in Japan, authoritarianism in China, representation of citizens at the EU headquarters in Brussels, Belgium, and a revised structure for the United Nations. But in the U.S. system, it is Congress that is the biggest problem, so that is where I would start. In other systems, the general point would remain the same, but specifics would differ.

Even if I had complete authority, I would not jump to implement my own proposals. Some are probably wrong-headed, and all need refinement. For that, I would seek much wider counsel, and I would look for ways to gradually experiment and phase in reforms over a generation or two. But I certainly would not sit around with the outdated, semi-democratic, poorly incentivized, inadequately trained, oddly chosen, and otherwise deficient Congress inherited from several centuries ago. And you? Do you believe that rapid technoscientific change can be governed adequately by the political system now existing?



I realize that the possibilities for political reform already discussed have been bold and unfamiliar -- and, in some readers' minds, impossible and perhaps even crazy. However, I have saved the most outrageous ideas for last, and, oddly enough, I think some of you will like one of them best! Whereas all the suggestions in chapter nine constituted reforms of the existing political system, my final proposals could be the basis of a truly revolutionary late-21st- or early-22nd-century way of governing.

Internet-Based Democracy? I have borrowed and adapted the first proposal from Nate Fisk, who recently earned a PhD in STS from Rensselaer. When young people look at the existing system of voting and government in the U.S. (and most nations), I would expect them to think, "Look how backward all this stuff is. Where am I, Valley Forge with George Washington?!" As Nate once asked, "What if we set aside all the meat-space campaigning and meetings, and moved everything onto the web?" The first time I heard this in a graduate seminar, I looked at the speaker with a glance that said something to the effect of, "Hah, hah, you've had your fun, now let's get back to discussing the book assigned for this week." But I gradually became intrigued with the notion. I still do not think that web-based politics should entirely displace people gathering in Washington, Paris, London, Tokyo, and especially the United Nations in New York City. But I do see some real advantages in moving politics into a broader electronic sphere. The plan remains rudimentary, and some of you may actually have something to add to it or otherwise improve it. There are four related components:

1. Open up political decision making to millions of people instead of the few tens of thousands who now dominate government in major nations.
2. Monitor more directly what government, business, and science are doing, by enlisting these millions of volunteer monitors.
3. Create online forums where "representatives" (see below for explanation) can deliberate and reach decisions about important issues.
4. Gradually cede greater authority to the online governance system, by requiring that the present face-to-face system take the online decisions into increasing account.

First, consider the possibilities raised by adding millions of new political participants. Have you heard anyone (maybe yourself) complain that politics is dominated by insiders, by persons who either are wealthy themselves or who can appeal to wealthy campaign donors? Have you heard anyone complain that it is mostly old people who are making the decisions in Washington? Have you thought or heard that some issues you would like to see addressed just never seem to make it onto the action agenda? If you are a bicyclist, for example, don't you wonder why in most of Troy one puts life at risk in riding on streets designed for cars and trucks (even if a phony bike lane is painted on the street)?

Closely connected to the question of whether the relative handful of insiders is doing a good job is the issue of how representative they are of the general public. "Government of the people, by the people, and for the people?" I do not see that coming close to actually happening, do you? Whatever happened to that "By the people" stuff? You show me anyone who thinks that s/he is well represented in Washington, DC, and I'll shake her hand and tell her she's very lucky, because most citizens do not feel that way. That is to some extent due to unrealistic hopes for democracy, and to wanting to get one's way instead of having to compromise (or be outvoted). But the dissatisfaction also is due to the fact that many Senators and Representatives actually are not very representative. And the same basic problem exists in most other nations. Some representatives truly do try to represent, of course. But guess how many national elected officials in the U.S. had income at or below the median prior to their election? Close to, if not exactly, zero -- and a majority were in the top five percent. That means a majority in the Senate and House are millionaires! Exactly how excited do you think affluent representatives are likely to be about the plight of the non-affluent? What fraction of elected officials are women? Hispanics? African-Americans? Gays and lesbians? Under forty years of age? Persons who have ever been jailed, rightly or wrongly (about ten million such people in the U.S., disproportionately minorities)?

Of course there are exceptions -- thoughtful, caring, knowledgeable legislators who go out of their way, and run electoral risks, to champion the causes of those who have few others speaking for their needs. But remember the cardinal rule of social science, that our thoughts and behaviors are shaped substantially by our circumstances; given that, it is fair to assume that the U.S. electoral system is systematically sending to Washington a cadre of white, highly educated, older, affluent male attorneys who do not and cannot effectively represent a majority of the American public. Some other nations are better, some worse; none has terrific representation. This dirty little secret actually is in plain sight for all to see, but few apparently perceive it -- or at least discuss what should be done to change it. Electronic democracy could open that up, allowing a million or more citizens to begin participating. Eventually, in principle, every single person on the planet could play some role in some kind of public decision making. One question that would immediately arise concerns how to define an issue, and how many issues should be considered.
Do 5000 different issues deserve discussion in a given year, or 50,000, or 500? I expect the intermediate number is closest to the mark, but only experience could determine how many online discussions would be appropriate. And it might well be desirable to have multiple discussions simultaneously on important topics, to guard against capture by unrepresentative enthusiasts or critics. Whatever the number, the discussions would have to be mediated by trained moderators. Simply getting more people saying more things quickly becomes overwhelming, as demonstrated by the proliferation of blogs, YouTube videos, and the plethora of other material on the web. The best are as good as one finds in The New York Times; the worst are hate-spewing trash. The great shortcoming is that many of the conversations, video uploads, and so forth are not moderated. However, there are some very good models that could be drawn upon; for example, the serious section of Reddit sorts materials based on what users find worth viewing. At Wikipedia a board of editors examines each entry, pointing out weaknesses that need to be addressed before an item can move from provisional to full status. I suspect Reddit is not quite the right model because its ranking of items is based too much on popularity; and Wikipedia is much more fixed in place than a political discussion could or should be. But the basic ideas make sense, and any new online representation and governance system requires considerable structure in order not to become a version of the Wild West. Could the world find or train 50,000 or more people capable of writing/speaking good English, Spanish, Mandarin, Arabic, or another supported language who would be willing to volunteer some of their time to serve as discussion leader-moderators for online discussion in an area of their interest? My guess is that people would be lining up if they came to believe that the new endeavor actually meant something.
If necessary, however, there are ways to sweeten the deal: Publish their names, rank the moderators and give awards to the best, have levels and allow those who have proven themselves at a lower level to move to a higher position, including some positions that would become full-time, paid occupations.

What authority would a moderator have? One possibility would be to have gradations of participants. Just as Amazon has "verified purchaser" and "top 500 reviewer," so the Online Representation System could initially start out with the burden of proof on the contributor: Your writing or speaking does not go out live; rather, it must first be screened. Once you have been certified as not a crazy, the next question is whether you actually can stay on a topic, or whether you tend to go off on tangents. That requires a different sort of moderation, but moderation nonetheless. The great problem with comments on blogs and other online spaces is that there is rarely anything cumulative: New comments either refer back to the original writer without taking notice of any evolution in the discussion, or the opposite -- new comments criticize or focus on issues brought up by later commentators, and the thread started by the original speaker is lost. A moderator whose job was to prevent that from happening obviously would have to spend considerable time and effort; but, equally obviously, scads of people are spending dozens of hours monthly on the web -- and a great many people long to do something constructive but have no realistic outlet.

I am imagining an outer circle of participants that anyone can join, but the ones who actually become decision makers have to earn their way into that position by some combination of astute comments, ability to listen and keep a conversation on track, and subject matter knowledge. The knowledge could be picked up to some extent along the way, almost effortlessly, if, say, the subject dealt with tires that corner well under wet conditions. But at some point, genuine expertise is implicated. If Michelin engineers claim that their HydroEdge tire is brilliantly designed against hydroplaning, but that it gives up snow traction in exchange, and they are operating at the design forefront with no way to get better snow traction without giving up resistance to hydroplaning, someone has to figure out whether they are right.
Aside from test evidence from Consumer Reports, Car and Driver, and other neutral testing organizations, I imagine there would be a way to get Continental Tire, Hankook, Bridgestone, and other tire manufacturers to bring expertise to bear in attempting to debunk Michelin's claims. They all in effect try to do some of that via ordinary commercials, but in that format they don't have to go head-to-head with each other's experts in front of an attentive, skeptical, and somewhat sophisticated audience.


How many people would comprise the inner circle of decision makers? Suppose fewer or more wanted to volunteer, and were qualified to do so? And many additional questions would have to be addressed for Internet Democracy to become sufficiently well defined that it could be a subject for serious consideration. But the ordinary democracy most of us now take for granted once was a radical idea that seemed impossible to implement.

An Alternative Approach, and Some Arithmetic The system described above would be largely self-organizing, with whoever wanted to participate joining and working their way up in credibility, "talking time," and influence. That has some significant advantages: (1) Anyone can begin doing it as a minor extension of discussion forums already in wide existence; and (2) It does not require approval of government officials who have a vested interest in maintaining their own authority and accompanying perquisites. But Internet Democracy, the informal system of "democracy via discussion," has one huge weakness: No real authority, at least at first. That means many people are going to be less interested in participating. It means no access to public funding for training and paying discussion moderators. It means that even the most thoughtful inquiries and discussions might not actually lead to significant improvements in technological steering. I therefore believe that it is worth considering how Nate's notion of bottom-up Internet Democracy could be made more compatible with electoral politics and government of the face-to-face sort that has prevailed for the past few centuries.

When I do thought experiments -- thinking of things that have never previously existed -- I ask myself, "What would be ideal?" In the end, when it comes to complex actions in conjunction with others, many tradeoffs and compromises are an inevitable part of life. But in creating a design vision, it makes no sense to be tepid: Why not try to envision what one really wants? What do you want? All that follows is largely a thought exercise for you. To model that, I will use my own goals -- but they are no substitute for yours.


I find it convenient to start with a simple analogy to economic life, where we each choose our own shirts and so forth. If economic life worked well, to my mind, everyone would have enough to make a reasonable array of choices adding up to a lifestyle. No one would impose much of anything on anyone else. What I want is that everyone gets enough of what they need, and that they get to exercise considerable choice in the matter. I do not want myself or others told what career to pursue, what to eat or wear, what music to listen to, what books to read. In economic life, the world obviously has not yet achieved that ideal of free and ample choice, but the middle classes in affluent nations are asymptotically approaching the ideal. Those of us with plenty are not doing well at extending what we have to others less fortunate; and the civilization is not well structured to assure that personal choices are compatible with environmental limits and other public needs. But at least there is a widely shared, relatively clear ideal for economic life -- freedom of choice among a plenitude of options.

What is the equivalent ideal for political life? ....uh....uh..... There isn't one, is there? Why not? Perhaps it is because many people implicitly are thinking and acting as if they were living generations or even centuries ago, back when democracy was a younger, less widespread, and less well proven experiment. There were somewhat realistic fears that democracies could be trodden under by 20th-century dictators, or aborted even earlier by the kings and nobles against whom some of our ancestors rebelled. What about you: Are you looking more backward or more forward? Tied to the status quo or ready to think anew?

My own ideal for political life is roughly equivalent to the economic ideal sketched above: I want a planet that provides mechanisms for every person to be represented fairly in every important public decision.
Such an aspiration is not feasible in my lifetime, of course, but who knows what might become feasible over the next century? There would be a huge number of difficult challenges, most of which would have to be worked out via experiments and learning from experience. But the biggest obstacle, I suspect, is that most people simply do not believe any such heroic endeavor could ever be realistic. I think it is eminently realistic, in principle, and in the remainder of the chapter I want to demonstrate that the arithmetic for real democracy works a lot more easily than you might initially imagine. Then you will have a better basis for deciding what kind of future-oriented system of government and politics you want to stand for.

Assuming 5 billion adult humans, how might they organize a system to represent themselves fairly? It turns out that it can be done in just eight steps:
Level 1: Constituent groups of ten neighbors or friends. Five billion total persons would produce 500 million lowest-level neighborhood or affinity groups.

Level 2: Each level-one group selects one of their number to represent them in a group of ten representatives at level 2. Thus each level-two group would represent a total of 100 people. To remain in office the ten members must maintain the support (of a majority?) of their friends and neighbors at level one. (There would be 50 million second-level groups.)

Level 3: Each level-two group sends one of their number to represent them at level 3. Again, it would be easy to replace errant reps, because they report to just nine others at level 2. (Five million third-level groups.)

Level 4: I will not keep repeating, but the process continues the same way at levels 4 through 7. There always are just ten members in a given group, so they can always keep watch on the representative they send higher. (500,000 fourth-level groups.)

Level 5: There would be a total of 500,000 representatives serving at level 5, meaning 50,000 groups.

Level 6: There would be a total of 50,000 representatives serving at level 6, meaning 5000 groups.

Level 7: There would be a total of 5000 representatives serving at level 7, meaning 500 groups. That is the same order of magnitude as the number of legislatures now existing on Earth.

Level 8: Each level-seven group chooses one of the 500 Top Global Elected Officials.
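The level-by-level arithmetic above is easy to verify mechanically. Here is a minimal sketch in Python using only the chapter's own assumptions (5 billion adults, groups of ten, one representative sent up per group); the variable names are mine, not the author's:

```python
# One-in-ten representation pyramid, using the chapter's figures.
ADULTS = 5_000_000_000   # assumed adult population
GROUP_SIZE = 10          # ten members per group at every level
TOP_TIER = 500           # the 500 Top Global Elected Officials

levels = []              # (level, participants at level, groups at level)
members, level = ADULTS, 1
while members > TOP_TIER:
    groups = members // GROUP_SIZE
    levels.append((level, members, groups))
    members = groups     # each group sends one representative up a level
    level += 1
levels.append((level, members, None))  # the top officials themselves

for lvl, people, groups in levels:
    if groups is not None:
        print(f"Level {lvl}: {people:,} participants in {groups:,} groups")
    else:
        print(f"Level {lvl}: {people:,} top global elected officials")
```

Running this prints exactly eight levels, from 500 million neighborhood groups at level 1 down to 500 officials at level 8, matching the counts in the list above.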

At what point do the participants and representatives start talking about substantive issues rather than merely choosing higher-level representatives? At any point they wish! At the lower levels, the issues discussed presumably will be potholes in the roads or other very local issues. As representatives take their subgroup issues to higher levels, however, even seemingly trivial issues may get aggregated into, say, a change in the level or allocation of gasoline taxes in order to funnel more money into maintenance of roads and bridges. And some local issues raise poignant international concerns. If citizens of the present Republic of Vanuatu in the South Pacific were convening in small groups this week, for example, they might well be worried about climate change leading to destruction by flood of their island nation. And it is possible that higher levels of the hypothetical new governance system might be more moved by their plight than are the present governments of major political units such as China, the U.S., and the EU. The proposed system would draw essentially everyone into politics, albeit at a very low level. And there would be a direct transmission line from every person on the planet to the top levels of government. Of course that is not to say that the 500 at level 8 or the 5000 at level 7 (presumably where most important decisions would be taken) would attend to every issue, or would resolve matters to everyone's satisfaction. For a complex civilization is bound to have serious disagreements and shortcomings as priorities are set, as compromises and tradeoffs are worked out. However, each representative all the way up the line would be accountable to just nine other people, so there would be no escaping surveillance and accountability. That would be a far cry from present arrangements, wouldn't it? As outlined above, there is direct contact only between adjoining levels. But I think many people would like to know more about the higher-ups. 
Can we figure something out to make that feasible? Personal knowledge of a representative would be difficult when jumping three levels, but not impossible. For sake of illustration, here is how fourth-level representatives (each representing 10,000 citizens from level one) might maintain contact with all of them. If each Level 4 representative reported once per month to audiences of 400 constituents, about every two years every person on Earth could be in the same room with a fourth-level representative. Written and/or audiovisual reports could be circulated in advance, the representative could start with an oral presentation, and then there might be several hours of question-and-answer. Emerging media make possible many other kinds of contacts, reports, and recorded glimpses of each representative's activities and views. (And it need not be a one-way street: lower-level constituents can prepare reports, send complaints, pose questions, or otherwise initiate interactions with higher-level representatives.) The level one meetings might end with a vote of confidence or no confidence in the Level 4 representative.

For sake of comparison: Each member of the U.S. House of Representatives presently has nearly 3/4 million constituents. Some of these are children, and the percentage varies among House districts; but the system I am proposing, with 10,000 citizens per Level 4 representative, would be something like 60 times more representative than the existing U.S. system at this level. Each U.S. Senator, on average, has more than three million constituents -- with those from high-population states such as California "representing" upwards of 20 million. The highest level of my system has half that number of constituents per representative, yet encompasses the globe.

If something like the one-in-ten system of representation were to be seriously considered, one issue quickly arising would be how to phase it in. Should it operate in parallel with existing electoral-governmental systems for a time -- a practice period of, say, a generation?
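The contact and comparison arithmetic in this paragraph can be checked in a few lines; a quick sketch using the text's figures (variable names are mine):

```python
# Checking the contact arithmetic above (figures from the text).
CONSTITUENTS_PER_L4_REP = 10_000  # each level-4 rep speaks for 10,000 citizens
AUDIENCE_SIZE = 400               # one monthly report to 400 constituents

months_to_cover_everyone = CONSTITUENTS_PER_L4_REP / AUDIENCE_SIZE
years_to_cover_everyone = months_to_cover_everyone / 12
print(months_to_cover_everyone, round(years_to_cover_everyone, 1))  # 25.0 2.1

# Comparison with the existing U.S. House ("nearly 3/4 million" per member):
US_HOUSE_CONSTITUENCY = 750_000
ratio = US_HOUSE_CONSTITUENCY / CONSTITUENTS_PER_L4_REP
print(ratio)  # 75.0 for the whole population; the text's "something
              # like 60 times" presumably nets out children
```

So a monthly cycle of 400-person meetings covers a 10,000-person constituency in about two years, matching the estimate in the text.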
Might real authority gradually be shared between current governments and the new system? Ought the new system to deal exclusively with technosocial issues, or would it cover all arenas of government? Should it only be for global affairs, or would it also eventually take the place of local, regional, and national systems of election and government? All worthy questions, and there would be additional ones. So this is an idea, not a plan. Again, my purpose in this arithmetic envisioning exercise is to help you see and challenge your own present assumptions; where you go from there is your call.

Conclusion The ideas outlined in this chapter clearly are outrageous when viewed against the perspective of the present way of doing things. But present institutions are inherited from the past -- inherited, too often, with inadequate reconsideration of what is needed in a new era. It therefore can only be expected that forward-looking ideas would be scorned, precisely because they orient toward the future rather than toward the past. In other words, one cannot trust one's own reactions to new ideas. If one were truly ready to innovate politically, s/he would be forever bumping into outmoded ways of thinking and acting, and would go around asking, "Why not?" Instead, the more common response to new ideas is something like "Why bother?" or "It cannot work." Our very skepticisms thus can be interpreted as evidence of the legacy thinking, habit, and somnambulism that keeps existing systems (mal)functioning.

Of course that does not mean that every repellent new idea is a good one. Many ideas are not very good, and deserve to be rejected or modified beyond recognition. In fact, most scientific experiments fail, most mutations die out, most new products do not last very long because not enough people purchase them. So anyone trying to grapple seriously with new ideas about politics is bound to be somewhat confused, uncertain: How to judge whether one is simply being backward looking, or whether one is being prudent and exercising justified caution? I am sorry to say that there is no formula for figuring that out. The best a thoughtful person can do is to give new ideas a fair hearing. And the best that a design-oriented person can do is to ask how existing or proposed designs might be improved.

Chapter 11. Engineering and Overconsumption: Confronting Endless Variety and Unlimited Quantity


Too many units of stuff are being designed, produced, advertised, sold, and eventually discarded, according to critics who refer to this as overconsumption. U.S. consumers use more per capita than people living anywhere else on the planet; Americans average 125 pounds daily of material, totaling more than 20 tons apiece each year. By one estimate, discarded and emitted annually in the U.S. are:

* 3.5 billion pounds of carpet sent to landfills;
* 25 billion pounds of carbon dioxide;
* 6 billion pounds of polystyrene;
* 28 billion pounds of food;
* 300 billion pounds of organic and inorganic chemicals used for manufacturing and processing; and
* 700 billion pounds of hazardous waste generated by chemical production.

Some 95 percent of these amounts occur before a product ever gets into the household, because the great bulk of consumer culture is hidden from sight -- mine tailings, cattle feed, manufacturing facilities in China, coal ash from power plants, and other materials that most people do not see. The volumes are astounding and the efficiency is low: For every hundred pounds of actual product that engineers and others create, another 3200 pounds of waste are entailed. In the course of a decade, 500 trillion pounds of molecules are transformed into nonproductive solids, liquids, and gases.

Responses to these facts differ. Call it entropy, and shrug. Perceive it as representing an affluent way of life. Join economist Juliet Schor in down-shifting to a lifestyle less geared toward wanting and getting. Side with Paul Hawken and other advocates for natural capitalism or Industrial Ecology who believe that clean production and environmentally friendly redesign can cure the problems while contributing to business profitability. These and other stances all are inherently partisan -- in the sense that anyone using the term overconsumption has at least an implicit view of what constitutes "too much." And anyone who denies that there is overconsumption likewise must have some standard by which to determine it. Therefore, no standard for "appropriate consumption" can be uncontroversial, and discussions of the subject tend to become entwined with general ideologies about contemporary life.
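The efficiency and per-capita claims above reduce to simple arithmetic; a quick check of the text's figures (variable names are mine):

```python
# Material-efficiency arithmetic from the text's figures.
PRODUCT_LBS = 100   # pounds of actual product
WASTE_LBS = 3_200   # pounds of waste entailed per 100 lb of product

efficiency = PRODUCT_LBS / (PRODUCT_LBS + WASTE_LBS)
print(f"{efficiency:.1%}")  # about 3% of material throughput becomes product

# Per-capita figures: 125 pounds of material per day.
DAILY_LBS = 125
annual_tons = DAILY_LBS * 365 / 2000  # short tons (2,000 lb each)
print(annual_tons)  # 22.8125 -- consistent with "more than 20 tons" per year
```

Both results match the text: roughly 3 percent of material throughput ends up as product, and 125 pounds a day works out to about 22.8 short tons a year.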
Thus, David Orr, director of environmental studies at Oberlin College, believes "The emergence of the consumer society...resulted from...a body of ideas saying that the earth is ours for the taking: the rise of modern capitalism; technological cleverness; and the extraordinary bounty of North America, where the model of mass consumption first took root. More directly, our consumptive behavior is the result of seductive advertising, entrapment by easy credit, prices that do not tell the truth about the full costs of what we consume, the breakdown of community, a disregard for the future, political corruption, and the atrophy of alternative means by which we might provision ourselves." Some readers will take issue with the extremity of Orr's claims, but his criticisms offer a useful springboard to think about how engineers and others contribute to high levels of technologically mediated production and consumption. Orr also offers an interesting test of legacy thinking: Did you take offense at any of his words in the quoted paragraph above? That would not mean you are wrong and Orr is right; but it might be a clue to a piece of your mind that does not want to reconsider your present assumptions and attachments to the comforts of a high-consumption lifestyle.

Possible Indicators of Overconsumption There clearly are a great many people in the world who have too little rather than too much, so why refer to economic growth as overconsumption? One answer has already been given: extraordinary inefficiencies. Contemporary rates of resource usage also appear to be unsustainable, especially if a world population of eight to ten billion aspires to live at U.S. levels. There is room for dispute because, for example, prices of many minerals have fallen rather than risen as extractive techniques have improved, and substitutes can be developed for some industrial purposes. For petroleum, however, the U.S. Geological Survey estimates that 75 percent of the world's conventional petroleum reserves and 66 percent of natural gas reserves have already been discovered, and the agency estimates that more than half the world's total supply of oil will have been used within the next decade or two. They could be proven incorrect, but even relatively small perturbations in supply and price lead to outsized moves in stock markets, business activity, car sales, and even military action.
So it might be foolish to blithely assume that the world's leading geologists are going to be wrong.

Overconsumption is indicated as well by the fact that enough pollution is being introduced into ecosystems to have destabilizing effects. A substantial majority of atmospheric, oceanographic, and other scientists now agree that climate change is occurring and that it is due at least partly to release of carbon dioxide, methane, and other greenhouse gases. And at any given level of technology and regulation, increased consumption leads to greater quantities of toxic substances released into air, water, and land. It may prove necessary to virtually reverse the course of the 20th-century chemical industry, phasing out synthetic organic chemicals made with chlorine. Human habitation also is encroaching on most of the planet, leaving less and less wilderness. One or more species are being wiped out daily, and some among the most threatened are orangutans, pandas, and other charismatic megafauna that many humans really value.

Also of concern are the less tangible social and cultural problems associated with consumer society, such as the overwork, debt loads, and stress which characterize a nontrivial fraction of the American public. The "diseases" termed "Affluenza" and "luxury fever" clearly are spreading throughout the world, led by Hollywood and by clever marketing. Ironically, far from making people happier, innovation in recent decades correlates with a decline of happiness in affluent countries worldwide. Participation in civic and other voluntary organizations has declined -- it is hard to find Girl Scout leaders -- partly because people are too busy and also because of stress, individualism, and fraying of the structure of local communities that once evoked participation.

Proliferating Variety and Other Roles Engineers Play If present consumption patterns are as problematic as the critics charge, if human health, environment, and global culture are at risk from the juggernaut of consumer society, then engineers who facilitate


technology-based consumption are making ethically charged public choices every day. They are just doing their jobs, of course, but in so doing are helping shape and misshape contemporary life. Engineering ethics thus comes into play daily, not just when whistle blowing or other unusual circumstances arise. Being a good engineer often means being a contributor to consumer society and its excesses. Many or most engineers would find it difficult to keep their jobs if they actively opposed overconsumption; indeed, they might well jeopardize their livelihoods if they merely refused to accelerate consumption. The formula for many workplaces is: Start selling something new, or figure out how to manufacture an existing product less expensively, package it more attractively, and sell more units of it. Technical professionals do such a good job of running the treadmill of production that one manufacturing engineer of my acquaintance has wondered about retitling his profession landfill supply. Engineers (and many others) promote overconsumption partly by diversifying the variety of goods and services produced and sold. Electronics, breakfast cereals, shampoos, and clothing are among the almost uncountable number of different products that manufacturing engineers help produce. Consider several categories of secondary and tertiary effects that stem from increasing variety. First, to stock a wider variety of items, big box WalMarts and other retail outlets of comparable size have emerged. Construction, maintenance, lighting, heating/cooling, land use, and other requirements have grown accordingly, assisted in crucial ways by architect-engineering firms and by many other types of engineers. Increasing variety also has led to a rapid increase in the number of different types of stores, such as specialty stores for electronic games or cell phones. Again, anyone involved in construction participates in this process, as do civil engineers responsible for road construction to the new
establishments, and as do the environmental engineers who enable "mitigation" of wetlands destroyed by commercial and retail projects. Third, the scale and number of stores bring greatly increased management and data-processing tasks: point-of-sale scanning and printing, software for inventory control, and automatic teller machines help with certain tasks, and simultaneously become part of the consumption machine. They come to entail their own somewhat independent R&D trajectories as competitors vie for market share on the basis of continuous improvement -- or at least change. Fourth, an indirect effect of the foregoing is that there are so many items in commerce that businesses have a hard time keeping spare parts on hand for every conceivable widget. Consumers do not know where to find the parts they need, decreasing the likelihood they will seek to service and repair items that in principle could still have useful life remaining. As consumers come to expect that hair dryers and other artifacts are to be replaced rather than repaired, manufacturers are encouraged to put even less emphasis on serviceability. For many product lines, therefore, increasing variety and quantity has correlated with reduced durability -- adding to the environmental burden. Fifth, diversity of products increases the information burden on consumers, consumer watchdogs, and government regulators. Whereas Consumer Reports in its early years could cover a high percentage of products on the market, it now is not unusual for five or more years to elapse between tests of big-ticket items like snow blowers. Many products go untested altogether. And it is quite common for models to have changed substantially before test results even can be published. The US Environmental Protection Agency operates a Premanufacture Notification program intended to keep excessively risky chemicals from being manufactured and distributed.
But manufacturers overwhelm the process by proposing several thousand new chemicals annually, far too many for the EPA's limited staff to analyze adequately. Also worth considering as an instance of proliferating variety is the familiar
story of the chlorinated chemicals that have caused such damage in the second half of the twentieth century. The chlor-alkali process used to produce caustic soda for pulp and paper also produced free chlorine as a byproduct, so chemical executives and their engineering employees invented new products utilizing chlorine with essentially no study of the consequences. We now know that adding chlorine to an organic molecule often makes it more toxic, less biodegradable, and otherwise more dangerous. Worries about chlorinated compounds surfaced nearly half a century ago, yet neither the International Union of Pure and Applied Chemistry nor the American Institute of Chemical Engineers has mounted a serious, generic inquiry into the problem. Chemical companies have defended each chlorine-based product until confronted by massive evidence, and then have grudgingly retreated one compound at a time rather than helping lead global reconsideration of the matter. Nor have there been many dissident voices among university faculty or chemical professionals; recently, however, a small but meaningful number of advocates for Green Chemistry and Chemical Engineering have begun to speak out.

Silence no doubt can be a form of unwilling acquiescence based on fear of workplace retaliation, but many engineering practitioners and educators seem blithely unaware of the relentlessly accumulating evidence calling overconsumption into question. They seem equally unaware that they play key roles in steering technological society. Major technological innovations are analogous to governmental legislation, in the sense that innovation establishes an enduring framework for everyday life. Technological choices help decide who gets what, how tradeoffs are made between present and future, and other inherently ethical matters. Engineers therefore can be thought of as non-elected representatives of the public, representatives who help "legislate" concerning technology.
Of course, none has the degree of authority that top elected officials wield, except perhaps when an engineer becomes CEO of a major company. What would it take for engineers to use their admittedly constrained authority more wisely?


Changes in Engineering Education?

One crucial step toward more socially conscious engineering practice would be for universities to foster greater awareness of the roles engineers play in proliferating variety, accelerating consumption, and governing technology more generally. Some of this can occur in humanities, social science, and management courses, but for engineering students to give credence to the matter, engineering instructors probably would have to shoulder some of the task. As many observers have pointed out, engineering faculty tend to emphasize narrow technical competence at the expense of more general preparation for thoughtful professional practice. One way to interpret this state of affairs would be to say that the better the job that engineering educators do in training their students under the present curriculum, the better prepared are the graduates to contribute expertly to their employers' goals -- and the goal of many businesses is to accelerate the treadmill of production and consumption.

How might engineering educators begin to slow down that treadmill, if they so choose? Faculty could press for more frequent, deeper curricular revisions to teach environmental design in closer accord with the forefront of the field. Whereas the forefronts of chemistry and chemical engineering are moving toward biocatalysis, teaching remains focused disproportionately on stoichiometric synthesis. Whereas environmental compliance, avoidance of liability, and building public image are high on the agenda of most large chemical companies, many textbooks and curricula pay more attention to engineering economics than to environmentally insightful chemical engineering. And whereas R&D is central to improving both the economics and the environmental record of the chemical industry, undergraduate curricula teach a rather static sort of chemical engineering that under-prepares graduates for continuous innovation.
A very different curricular revision would import Industrial Ecology centrally into the curriculum. In order to cut down on the resources used to
accomplish a given objective, industrial ecologists propose that engineers increase the "dwell time" of materials in the economic system. Design for "X" would come to include DFE (design for environment), design for durability/serviceability, and so forth. Neither government mandates nor professional norms presently encourage such a design approach, however, and most employers in most nations place substantial obstacles in the way of engineers who would try to go beyond what law and market competition require. Schools of engineering could attempt to counteract this by giving lifecycle analysis and other concepts from industrial ecology greater prominence in the curriculum. Such changes may be fairly easy, because clean production and chemical greening can fit fairly well with conventional engineering practice, especially where innovations promise to reduce costs. In fact, most schools are gradually adding more environmental material to the curriculum. Thus, the University of Dayton's first-year design course, billed as a model of seamless integration of social and ethical dimensions into engineering education, has students work on improved design for appliances such as toasters and can openers. Some thereby learn about durability and energy efficiency, and about half the students end up saying that it is an ethical responsibility of designers to develop products that most efficiently utilize a diminishing supply of nonrenewable energy, particularly when technologies exist to achieve this end. Many other schools now offer an elective called something like "Industrial Ecology and Manufacturing," "Clean Production," or "Sustainability in Manufacturing." Depending on the instructor and the department's culture, course materials may include newspaper and TV coverage of climate change, "globalization" and the role of the World Trade Organization, and may deal with enduring issues such as actual or perceived trade-offs between jobs and environment.
On most campuses, however, there are only one or two such classes, and they tend to be electives taken by a minority of engineering students. Still, if dealing with the excesses of
consumer society only requires introducing considerations of environmental sustainability into engineering education, there is a good chance the changeover will be made. Many European universities already are ahead of the US in this regard, and there are enough signs of progress in the US to have a reasonable expectation of curricular greening over the next generation.

That would be a great step forward, but my guess is that dealing satisfactorily with the overconsumption problem would require going well beyond environmental issues and user-centered, green design as these now are being defined. An additional step would be to press for more social design in the curriculum: the case of the plow for poor Mexican farmers is a well-known example, and Harvey Mudd is one of the programs where students design assistive technologies for less physically able people. The Dayton curriculum includes design of a water filtration system for use in a poor country, and is said more generally to aim at awakening students' social, cultural, ethical, and environmental responsibilities. At Rensselaer Polytechnic Institute there is an inter-school major in Design, Innovation, and Society, typically a dual major between Science and Technology Studies and Mechanical Engineering. The program leadership is located in the School of Humanities, Arts, and Social Sciences, and the major blends social with technical considerations from the introductory courses to the capstone projects. Smith and Harvey Mudd have engineering programs with a social orientation. This approach might begin to interrupt the present process of engineering for overconsumption, because engineering curricula generally lack broad professional-service themes, focusing instead on technical content and on design and problem-solving processes.
The relative neglect of social content may be attributable to the traditional deference of engineering to the business sector; to the impossibility of combining genuine undergraduate education with professional training in a
four-year program; to self-selection of engineering undergraduates and faculty; and to a tendency to become preoccupied with problem-solving techniques. If one wanted to get closer to the heart of engineering's arguably co-dependent relationship with overconsumption, it might be desirable to probe whether some aspects of engineering design as now taught do not actually belong at an institution of higher learning. Weaponry R&D was reduced or eliminated on many campuses in the 1960s and 1970s. If circumstances had not pretty much forced an end to the Liquid Metal Fast Breeder Reactor, many observers might adjudge it socially inappropriate and therefore out of place in engineering teaching. And hardly anyone in our era would advocate designing new persistent insecticides such as Dieldrin™. So it is by no means the case that anything goes in engineering curricula: There always are adaptations to prevailing social mores, to funding arrangements, and to other judgments about appropriateness. Might it be time to reconsider some current teaching that goes too far in supporting the excesses of consumer society? This is a touchy matter, and I leave it to the reader to fill in possible examples from engineering. But consider an analogous issue from business curricula: Can they justify teaching classes on advertising that prepare students for careers devoted partly to playing on would-be purchasers' envy, low self-esteem, and inchoate longings? When successful, as research shows they often are, marketers' practices constitute manipulation and sometimes border on thought control. I have grave doubts about whether such an aim is consistent with the spirit of free inquiry that supposedly characterizes higher education.
A university curriculum committee might insist that marketing courses contain substantial material challenging existing marketing practices, and a similar expectation might be directed toward engineering curricula to partially counteract teaching that now risks fostering overconsumption and its attendant social ills. Because mechanical and manufacturing engineering are the principal
sub-fields involved in design and production of consumer products, it is these curricula that arguably deserve special attention. ABET, the organization that accredits engineering schools, mandates that students study contemporary global and corporate contexts, including social, economic, legal, ethical, and environmental issues. But the accreditation process leaves discretion to each campus and department, and most do the absolute minimum, maintaining equation-oriented engineering studies that do not confront problems of the sort raised in this chapter and throughout the book.

Another issue in curricular redesign is that when ethical issues are mentioned at all in engineering texts and courses, the emphasis overwhelmingly is on right conduct by individual engineers. An important change in engineering ethics teaching would be to shift some of the focus away from whistle-blowing engineers such as Roger Boisjoly, toward broader social processes impinging on engineering practice. Focusing on micro-level transactions puts the field in opposition to long-standing traditions in the social sciences: economists do not primarily study consumers or business executives, but focus on the economy as a complex system; social psychologists think about situational determinants of individual behavior; and sociologists map tendencies in populations and unpack the (il)logic of collective processes. Social science, in other words, is not primarily about learning how to better understand and advise individuals. Hence, engineering educators arguably need to pay greater attention to the "social design" of overconsumption. There also is a practical reason for shifting to a more thoroughly social understanding of professional ethical practice: Most engineers are employees who will lose their jobs if they refuse to play their assigned roles in the treadmill of consumption.
To empower them to redesign toward appropriate consumption would require business executives and customers to behave differently. This would entail changes in tax laws, government purchasing, R&D, cultural mores, technological momentum, population growth, maldistribution of income, and many other factors. I worry that even the currently fashionable emphasis on participatory or experience-based design
involving clients -- obviously laudable in many respects -- takes attention away from systematic consumption patterns, and the social causes thereof. Engineering educators would take an unrealistic stance in class by pretending that such constraints and tensions do not exist in the workplace; but perhaps educators take an equally inappropriate stance by assuming too readily that business practices should determine engineers' behaviors. Might there be a way to teach best practices while acknowledging that contemporary institutions rarely practice them?

Also needing attention are several issues connected closely with undergraduate engineering curricula: accreditation, professional licensing, and ongoing lifetime education. The tests and test-preparation materials in chemical engineering I have examined have radically under-responded to the emergence of green chemistry and green chemical engineering. According to my inquiries at the American Institute of Chemical Engineers, professional licensing is being overseen disproportionately by retirees who serve as volunteers rather than by paid staff and high-powered engineering educators at the cutting edge of their fields. If such deficiencies are widespread throughout many engineering subfields, it might require only a comparative handful of educators and practitioners to intervene and nudge the relevant committees toward more scrupulous oversight when they visit campuses and assess curricula.

Two of the most important facets of the overconsumption predicament, I have suggested, are the intertwining problems of variety and quantity. Together they constitute a sacred cow, partly because doing interesting new stuff is a challenge dear to many engineers' hearts, and also because the current economic system mandates proliferation of newer and more. To challenge the momentum may seem foolhardy, but challenge I think an ethical profession must.


By no means are engineers solely responsible for excessive variety and quantity, of course, nor for overconsumption more generally, but they do play key roles in the treadmill of innovation, production, consumption, and disposal. As industrial designer Victor Papanek expressed the point three decades ago:
There are professions more harmful than industrial design, but only a very few of them. . . . Never before in history have grown men sat down and seriously designed electric hairbrushes, rhinestone-covered shoe horns, and mink carpeting for bathrooms, and then drawn up elaborate plans to make and sell these gadgets to millions of people. . . . By designing criminally unsafe automobiles that kill or maim nearly one million people around the world each year, by creating whole new species of permanent garbage to clutter up the landscape, and by choosing materials and processes that pollute the air we breathe, designers have become a dangerous breed. And the skills needed in these activities are carefully taught to young people.

He overstates, I would say. And some fields of design and engineering are less vulnerable to criticism than others. Still, there is enough merit in Papanek's claims to warrant reconsideration of how university faculty go about preparing the next generation of technical professionals. One must acknowledge, of course, that many engineers now operate under significant constraints; all but the most heroic, clever, or fortunate probably are limited in what they can do until social, political, and economic changes alter the conditions under which their companies function. But everyone can face up to the dilemma, talk about it with others to raise awareness, and begin to make small changes in hopes of opening the way for larger ones. And if one must behave somewhat unethically in the workplace, s/he perhaps owes compensatory pro-social behavior at home as a consumer and citizen. Moreover, engineering educators have much less justification for neglecting the topic of overconsumption than do practicing engineers.

If the engineering profession sometimes has unintentionally colluded in promoting overconsumption and its attendant ecological and social ills, most of the rest of us have done likewise. There is no reasonable alternative
except to understand that everyone works within the confines of an era's blinders, and to forgive. But from here forward that is not an acceptable solution, is it? There is ample information available not to continue blithely endorsing the boundless, technocratic approach to consumer society. Whereas former President Clinton during his final month in office averred that "People are not going to be willing to give up becoming wealthier -- and they shouldn't," it may be more defensible for those in the world's upper ten percent in income and wealth to begin asking each other, "How much is enough?" Virtually every engineer in the U.S., Europe, and Japan is in that upper ten percent, and engineers are "carriers" of the Affluenza disease. Universities historically have been devoted partly to asking mind-expanding questions, not just to preparing young people for the world of work. Although it may be inconvenient and even monetarily risky to challenge endless proliferation of variety, quantity, and other aspects of overconsumption, is that challenge perhaps overdue? Well-known designer and critic Nigel Whiteley says in Design for Society that products and other engineered phenomena henceforth should reflect not merely technical competence but "intelligent thought and action," and that designers and consumers "can no longer plead ignorance." Do you agree, and what steps, if any, would you advocate for engineering education and/or for the practice of engineering?


Chapter 12. Nanoscience and the Privileged Position of Science


This chapter uses the case of nanoscience to understand science as a political force. Nanoscience is a classic hot research arena: Graduate students, postdocs, and young researchers rush into each niche as it opens; conferences and professional publications buzz with the latest results; pundits offer glowing predictions of benefits to environment, world hunger, and medicine; government officials generously dole out taxpayers' money; and voices even arise to counsel the need for prudent foresight. You will recall from chapter 8 the very different profile of green chemistry: little scientific interest, research coming nearly a century late, niggardly funding, slight public attention. What can these polar-opposite cases teach about contemporary science and its social consequences?

Overview of Nanotechnoscience
Nanoscience and nanotechnology are the art and science of building complex, practical devices with atomic precision, with components measured in nanometers -- billionths of a meter. This is not a typical scientific field inasmuch as researchers do not pursue common substantive knowledge: "smallness" is the unifying attribute, so it may be more appropriate to term research at the nanoscale an approach rather than a field. Indeed, in private some scientists go so far as to suggest that nano functions more as a label to help obtain grant monies than as a coherent set of research activities. Nobel prize winner Richard Feynman is generally credited with calling attention to the possibility of working at the atomic level in a 1959 lecture at Caltech titled "There's Plenty of Room at the Bottom." Most such research now being conducted is relatively mundane, whereas the hype and concern
about nanotechnology are due more to the dramatic notions first presented in then-MIT graduate student K. Eric Drexler's Engines of Creation: The Coming Era of Nanotechnology. This visionary/fictional 1986 account for non-technical readers sketched a manufacturing technology that would construct usable items from scratch by placing individual atoms precisely where the designers wanted. This he contrasted with contemporary manufacturing, which starts with large, preformed chunks of raw materials, and then rather crudely combines, molds, cuts, and otherwise works them into products. The current approach uses far more energy and creates far more waste than molecular manufacturing (MNT) would require. Moreover, Drexlerian molecular manufacturing would become self-sustaining, with tiny factories building tiny factories to build tiny machines. However, because some of these might escape their designers' control, Drexler warned that special controls would be needed: "Assembler-based replicators could beat the most advanced modern organisms. . . . Tough, omnivorous bacteria could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days." This warning was reiterated to a larger audience in Bill Joy's 2000 Wired magazine article, "Why the Future Doesn't Need Us," describing a world of self-replicating, exponentially proliferating "nanobots" that could drown the planet in an uncontrollable "gray goo." Michael Crichton gave the warning a more explicitly sci-fi spin in his 2002 novel Prey, featuring swarms of intelligent, predatory, nearly unstoppable nanobots. Crichton of course could be dismissed, but it was impossible to brand Bill Joy a Luddite, given his role at the time as chief scientist at Sun Microsystems and his standing as an architect of the world's information infrastructure. Nevertheless, the nano research communities quickly mobilized like antibodies to neutralize him.
More ordinary, but still potentially transformative, aspects of nanoscience include faster computing aided by miniaturization and by new ways to store information at the atomic level, including quantum computing. Some
believe that it may be possible to achieve microchip-like functionality from single molecules, enabling tiny, inexpensive computers with thousands of times more computing capacity than current machines. New materials include carbon nanotubes and other very strong, very light advanced materials. Nanomix Corporation is working to develop "new hydrogen storage systems that will power the fuel cell revolution, by using nanostructured materials to store solid-state hydrogen for automotive and portable power applications" for what is being touted as the coming hydrogen economy. Other endeavors attempt to replicate biological functions with synthetic ones, such as designing and synthesizing organic molecules and supramolecular arrays that can mimic green plants' photosynthetic processes -- perhaps opening the way for solar energy in a more fundamental sense than what the term so far has meant. Most of the research presently is at a pre-commercial stage, although nanoparticles are on the market (e.g., titanium dioxide in sunscreens), and health and environmental concerns around nanoparticles have been a point of contention among environmental groups, business, and government regulatory bodies.

Along with tangible investments and research trajectories comes a good deal of hype of the sort that commonly shows up in the early years of new cycles of innovation: "Imagine highly specialized machines you ingest, systems for security smaller than a piece of dust and collectively intelligent household appliances and cars. The implications for defense, public safety and health are astounding." Even normally staid government reports burst with promotional fervor, and the Rensselaer web page once said that
Our world is riddled with flaws and limitations. Metals that rust. Plastics that break. Semiconductors that can't conduct any faster. Nanotechnology can make it all better -- literally -- by re-engineering the fundamental building blocks of matter. It is one of the most
exciting research areas on the planet, and it may lead to the greatest advances of this century.

Another R&D pathway is nanotechnology applied to biotechnology, nano-bio for short. A mundane example is the use of nanoscale bumps on artificial joints to better mimic natural bone and thereby trick the body into accepting transplants. The pharmaceutical industry presumably will benefit from nanoscale techniques. More generally, as one advocacy organization puts it, "Recent developments in nanotechnology are transforming the fields of biosensors, medical devices, diagnostics, high-throughput screening and drug delivery -- and this is only the beginning!" Although relatively routine at present, some of the potential consequences are profound, especially the joining of nanotechnology with biotechnology to blur the dividing line between living and non-living matter. For example, neural implants could make machine intelligence directly available biologically, and tiny machines may live (if that is the right word) in the body either just as sensors or also to treat incipient illness. It is sci-fi at present, but seemingly sober researchers speak of devices "as small as the tip of a hypodermic needle" that "could detect thousands of diseases." Critics worry that such innovations may continue the trend of increasing costs for medical care, widening the divide between medical haves and have-nots, and may accentuate tendencies to substitute medicine for a healthy lifestyle.

Nanotechnology has been embraced enthusiastically by government officials. The Japanese Ministry of International Trade and Industry in 1992 launched the first major nanoscience initiative, funded at what seemed a generous amount -- $185 million over ten years. That has now been dwarfed, with U.S. government support for civilian nanoscience and technology research at approximately $900 million annually under the 21st Century Nanotechnology Research and Development Act, which in 2003 formalized a variety of programs and activities undertaken by the Clinton
Administration's National Nanotechnology Initiative. European and Japanese funding likewise is increasing rapidly.


As occurred with the so-called War on Cancer launched by the Nixon Administration, researchers have repackaged their work to jump on the nanotech funding bandwagon. Many are actually revising their research interests as they become involved in nanoscience, and new graduate students and postdoctoral researchers of course are able to move into the hot new areas of study without much loss of accumulated intellectual capital. This is partly a response to the ready availability of funding, but anyone who has been around nanoscientists can attest to their genuine enthusiasm. Conferences abound devoted specifically to nanoscience and nanotechnology, and more general international scientific meetings include increasing numbers of papers with nano themes and methods. Nanotechnology relationships are tight among university, industry, and government, continuing the triple-helix trend of the past generation. IBM and Xerox are among an increasing number of large corporations engaged in nanotechnology R&D, and start-up firms hoping to mimic the explosive success of Silicon Valley are racing to get products onto the market. Carbon Nanotechnologies, Inc., for example, has claimed to be a "world leading producer of single-wall carbon nanotubes . . . the stiffest, strongest, and toughest fibers known." Its most advanced product, "BuckyPlus™ Fluorinated Single-wall Carbon Nanotubes," was selling for $900 per gram at one point, many times the price of gold. In sum, nanoscience and technology R&D consists of myriad minor, relatively useful, and perhaps harmless trajectories combined just about inextricably with some fascinating, potentially helpful, and potentially disastrous radical innovations.


Government, Business, and Science

What makes one research area so compelling while other areas are ignored? In green chemistry, why the belated push on supercritical fluids and solvent replacement, yet still virtually no effort to use medicinal chemistry principles for industrial chemical design? Why nano-bio, nano computing, and many other facets of nanoscale inquiry, yet much less support for what seems like the most important potential, molecular manufacturing? The commercial possibilities for nanotechnology seem uppermost in the minds of government officials. This appears to be a manifestation of an almost religious belief in the powers of science-based technology: "(O)nly scientific and technological supremacy over the rest of the world will allow the country to prosper economically." The 21st Century Nanotechnology Act mandates ongoing reporting and priority setting on what needs to be done to keep the U.S. competitive in nanotechnology commerce.

As a first approximation, it seems a safe bet that major technoscientific initiatives fostering the interests of political-economic elites are more likely to succeed than those that do not. The solicitous attention from elected officials comes about partly because of the privileged position that the business sector enjoys in what are known as market-oriented societies, but that might equally well be termed elite-interest societies. Business executives occupy a role unlike that of any other social interest, in part because they are structurally located to make key economic decisions, including creating jobs, choosing industrial plant and equipment, and deciding which new products to develop and market. Many direct and indirect supports, interferences, and other partly reciprocal connections between science and business of course arise during this process. Sometimes referred to incorrectly as the private sector, business actually performs many tasks that are public in the sense that they matter greatly to almost everyone.
Even if no business executive or lobbyist ever interacted in any way with government officials, business would be highly political in the sense of exercising influence over key public choices. However, business of course also is privileged in a second sense, in that executives have unrivaled funds, access, organization, and expertise to deploy in efforts to influence government officials. Although the connection between business and government in a system of political-economic power is crucial to understanding scientists' privileged position, government officials' willingness to splurge on nano is due in part to the fact that the five myths of science analyzed by Dan Sarewitz are pretty much alive and well in the halls of Congress and in government corridors throughout the world. These include "the myth of infinite benefit" (more science and more technology will lead to more public good), "the myth of accountability" (peer review, reproducibility of results, and other controls on the quality of scientific research embody the principal ethical responsibilities of the research system), and "the myth of the endless frontier" (new knowledge generated at the frontiers of science is autonomous from its moral and practical consequences in society). There has been considerable analysis of changes in university-industry relations in recent decades, with talk of "triple helixes" and new forms of intellectual property and so forth. Sufficient for present purposes, however, is an utterly obvious and simple fact: More chemists work with and for industry than for any other social institution, and the same is true, or soon will be, of nanoscientists; the National Nanotechnology Initiative and other government programs explicitly tout the economic payoff and give grants partly on the basis of the projected societal benefit. Thus, the starting point for understanding science as part of a system of power is to acknowledge that those with authority in business and government make use of scientists to the extent they find it convenient, profitable, or otherwise in line with their own aspirations.
Sometimes this takes a straightforward route, with members of a relatively small, like-minded elite justifying their power over important scientific and technological choices by referring to the need for an efficient response to both commercial and military threats. Getting a jump on foreign competition is surely in the minds of some legislators voting to support nanoscience, and hesitancy on green chemistry has something to do with excess capacity in the world chemical industry, enormous sunk costs in plant and equipment, and the declining fortunes of U.S. chemical firms. Conversely, there is no question but that brown chemistry was extremely useful to businesses, and those deploying the chemicals had little motivation to worry about long-term health and ecosystem effects. Although a full replay of the 20th-century cowboy economy is unlikely given heightened environmental awareness, businesses exploring nanoscale manufacturing or new products utilizing nanoscience are likely to find that they can delay or water down proposed regulations. Although U.S. companies are under greater pressure than those in most other countries to show quarter-by-quarter profits, there are not many business executives anywhere who would willingly forego near- and middle-term profits merely because pursuing them results in longer-term problems for society. For example, diagnosis of ill health made possible by tiny, ingestible sensors may increase medical costs and rates of iatrogenic illnesses (health problems caused by diagnostic and treatment efforts) by inducing physicians and patients to intervene where they previously would not; the company manufacturing the new sensor can nevertheless make a profit. If nanoscale surveillance makes privacy even less protected than is now the case, the problem will be borne largely by persons other than executives of the relevant companies. Hence, the emerging technical potentials can be useful to business and government even if in some larger, longer-term sense they cause more problems than they solve.

The Controversy over Molecular Manufacturing

Mainstream nanotechnology leaders at the U.S. National Science Foundation and elsewhere have worked to downplay the potential for the more radical innovations associated with molecular self-assembly. Thus, a semifinal draft of the 21st Century Nanotechnology Act called for the National Research Council to submit to Congress a review of the technical feasibility of molecular self-assembly for the manufacture of materials and devices at the molecular scale, and to assess the need for standards and strategies for ensuring responsible development of self-replicating nanoscale devices. However, this was substantially watered down in the final version of the bill, and Christine Peterson of the Foresight Institute blamed the deletion on "entrenched interests." "That's sad," Peterson said; "immense payoffs for medicine, the environment and national security are being delayed by politics." The politics to which she refers is occurring largely within the technoscientific community. Nobel prize winner Richard Smalley dismisses both the promises and the problems associated with molecular manufacturing: "My advice is, don't worry about self-replicating nanobots. . . . It's not real now and will never be in the future." Smalley, developer of carbon nanotubes and a major force in the U.S. National Nanotechnology Initiative, says the necessary chemistry simply will never be available. His argument, presented in a number of highly visible publications including a cover story in Chemical and Engineering News, holds that there is no way to place atoms or molecules sufficiently precisely. Disagreeing with him point by point in a series of published letters is Eric Drexler, now the main force behind the Foresight Institute, which aims to help prepare humanity for the nanotechnology era he believes is coming. Those who count within the nanoscience community tend to side with Smalley in dismissing Drexler's view. U.S. nanotechnology czar Mihail Roco might have reason to downplay the more radical potentials, for fear of provoking the kind of resistance that agricultural biotechnology has encountered in Europe.
Although Congress ultimately defeated a requirement to set aside five percent of the $3.6 billion nanotechnology authorization for ethical, legal, and social issues, the House Science Committee inserted a requirement for public consultation in the 2003 nanotechnology legislation precisely to try to head off a fate such as befell genetically modified organisms (GMOs) in Europe. More generally, government funding and industry interest both depend in part on public quiescence -- on treating nano capacities as ordinary, boring science and technology, rather than as an issue, like GMOs or nuclear power, deserving intense, widespread scrutiny. However, it is not as clear why so many others are willing to go along in dismissing molecular manufacturing, with even Greenpeace issuing a pretty tame report on nanotechnology as a public issue. The environmental organization that did a great deal to bring the GMO issue onto public agendas, the ETC Group, called in 2004 for a moratorium on certain nanotechnology research and diffusion, but the idea of a moratorium or any special precautions has not won a following. Within the NSF sphere, Roco and his allies as of this writing were managing to keep the controversy over molecular manufacturing entirely off the table in public meetings sponsored by the NNI. Not only presenters but even audience members asking questions at such meetings somehow get the quiet message that retaining one's credibility requires not discussing molecular manufacturing. I have asked participants and other observers how that message is telegraphed, and the answers are not very illuminating: Everyone just knows not to talk about it. As a political scientist, I have no professional opinion about which of the technoscientists is correct. I do notice that the anti-Drexlerian arguments shift quite a bit over time, as if earlier arguments were found wanting. And I notice that Smalley, Whitehead, and Roco rely on assertion and flamboyant language, in a way that reminds me of other realms of politics more than of science. When they do mount an argument, metaphors such as "slippery fingers" play a larger role than equations or well-established principles such as the second law of thermodynamics.
One must acknowledge that it is tough to be a pure scientist when debating futuristic potentials; still, there is a somewhat eerie resonance with many previous arguments about what technical innovation cannot do, from flying across the Atlantic to open-heart surgery. Hence, it probably is premature to dismiss the possibility of molecular manufacturing, for good and/or ill. In fact, by some definitions the phenomenon is already here. The fast-growing company known as 3D Systems and its competitors are far away from anything like self-replication, but they arguably are the forerunners of the molecular manufacturing that Drexler foresaw. Surgeons have taken CAT scans of leg and jaw bones for replacement surgery, then downloaded the digitized information to a 3D printing company, which used powdered, melted titanium to shape the bone layer by wafer-thin layer. That of course required highly specialized, expensive equipment and know-how, so it does not necessarily mean that you and I soon will be manufacturing our own three-dimensional coffee makers and other products in our homes. What the printers can do is expanding year by year, however, and the cost is coming down: A low-end model released in 2012 cost less than $2000. The possible implications for manufacturing employment are impossible to estimate, but it is at least conceivable that Chinese factories -- and hence the employment of a hundred million or more workers -- could be negatively impacted. On the other hand, if 3D printing capacities proceed as the enthusiasts imagine, then all sorts of new possibilities are likely to come with the enhanced capacities. At present, this potentially epochal innovation is being shaped and diffused almost entirely by relatively small entrepreneurs and their customers. The technology is still malleable, so it would be an opportune time to use Intelligent Trial and Error -- to hold early inquiry and debate, to institute appropriate precautions, and to arrange for close monitoring and rapid learning from experience. No such proposal has emerged either from political candidates or from engineering societies, most of whom no doubt are blissfully unaware of what could be on the horizon.

The Privileged Position of Science


Interestingly, the 3D advancements are occurring without much dependence on the fancy nanoscience that is garnering so much attention. Nevertheless, the ordinary trial-and-error tinkering in start-up 3D printing companies could never have come about without contributions from previous generations of scientists, engineers, entrepreneurs, and customers. When the gee-whiz or fairly ordinary new gadgets enter commerce, scientists rarely are front and center. Science thus is not as directly central to daily life as is business, and scientists often lack the monetary inducements deployed by business executives; nevertheless, there are some interesting and important parallels with the privileged position of business. Thus, just as public well-being depends on what is called a healthy business sector, so has technological civilization come to rely on scientists to conduct inquiries into matters beyond most people's competence, train future generations of technical specialists, and perform functions at the interface between scientific knowledge and technological innovation. As scientists link with business, they obtain certain privileges. The linkages with business and government obviously magnify the influence of scientists on the social construction of everyday life, but it would be a serious mistake to suppose that the scientists are merely the handmaidens in this relationship. Many technoscientific researchers originate and undertake their inquiries to a greater extent than that inquiry is foisted upon them. It was not consumers or workers or business executives or government officials who chose not to pursue green chemistry -- none of them had an inkling of how molecules are put together. The same goes for nanotechnology: Feynman, the instigator, and most other contributors to the discussion have been technoscientists, except for public figures touting and voting to fund the new initiatives (which hardly any of them much understand). If anyone chose, it was the chemists -- although I doubt whether "choosing" is the best term to describe the complex sociotechnical processes that led to brown chemistry and may be leading to new ways of working with molecules and atoms.


Whoever may be winning in the scientific community's internal battles, if scientists are exerting broader social power in the nanoscience and brown/green chemistry cases, what is the nature of the influence? I do not refer to ordinary interpersonal relations in the laboratory or battles for influence within subfields, but to the world-shaping influence exercised by the development and deployment of new knowledge -- or the failure to develop and deploy it. Influence of this sort does not normally take the form: A has power over B when B does what A chooses. Such a formulation probably is too simple for understanding influence relations in any complex social situation, but it is especially questionable when it comes to the cascading series of unintended consequences that followed from chemists' inquiries into organic chemistry, and that may follow from nanoscientists' current enthusiasms. Making sense of science as a social phenomenon requires some way of looking at scientific influence that is consonant with the bull-in-the-china-shop or sorcerer's-apprentice character of the havoc that science-based practices sometimes wreak. One's thinking must simultaneously allow room for the nearly magical and extremely useful outcomes sometimes catalyzed by scientific knowledge, as well as for the fact that a high percentage of scientific work reaches and benefits/harms no one at all outside of a few researchers in a subfield. Suppose we were to suspend certain assumptions about the inevitability and rightness and naturalness of science as now conducted, backing far enough away to look afresh at the whole set-up. Is there perhaps a sense in which it is a bit weird to have quietly allowed 20th-century chemists and chemical engineers to go along their merry ways synthesizing tens of thousands of new substances that the world had never before experienced? And weirder yet that they were paid and otherwise encouraged to help produce and distribute billions of tons of substances that from some points of view can be characterized as poisons or even as chemical weapons? From our present vantage point, knowing about the potentials of green chemistry, is it not somewhat strange that chemists (more than anyone else) essentially chose not to activate these potentials earlier -- a lot earlier? With respect to nanoscience and technology, the dangers awaiting may or may not rival endocrine disruption and environmental cancer and species extinctions and the other effects of synthetic organic chemicals. However, it is pretty difficult to miss the fact that the nanotechnologists are busy creating nontrivial threats, some of which could be unprecedented: As Bill McKibben puts it, looking not just at nanotechnology but also at human biotechnology and related technologies, a central question facing humanity in this century is whether by the end of it we still will be human. Despite the risks, the nanoscientists are about as free of external restraint as were the brown chemists of a century ago. Putting aside the assumption that it is natural for scientists to pursue science makes it possible to consider whether something peculiar is going on, something not unlike children playing with matches while the grownups are away -- except, in this case, it is not clear who the grownups are. Again, however, where is power in this story? Scientists are not powerful in the ordinary sense of the word, yet they collectively have very substantial effects. A conventional way of thinking about the hazards created by technoscientists is in terms of unintended consequences, especially secondary and tertiary consequences. Even among the relatively few who openly acknowledge that new technoscientific endeavors are bound to entail a small, medium, or large fraction of negative outcomes that cannot be foreseen, the mainstream view is to treat this as just a matter to be regretted, not a problem to be tackled. Before accepting that fatalism, one might wish to ask a few questions: What is the nature of the power relationships around unintended consequences? Who can be construed as making the decisions? How might things be different?
Normally, the implicit answer is: No one; vector outcomes just emerge. However, as Langdon Winner says, unintended consequences are "not not intended." Someone is at least implicitly deciding to go ahead even though going ahead entails unintended consequences -- some of which are likely to be negative, and perhaps quite potent. Whoever is participating in that choice can be said to be exercising authority over others, inasmuch as those ultimately suffering (or enjoying) the unforeseen results would not be doing so without those making the original choice having decided to proceed. (That is a string of logic you may wish to reread and think a bit about.) Many scientists and others with influence over unintended consequences would claim to be operating within a legitimate political-economic order, and would claim that they have the consent required to proceed. Under contemporary law, that is correct. In every other way, however, the argument is pretty silly, inasmuch as many of the future victims/beneficiaries are not yet born, live in countries other than the ones catalyzing the R&D, are utterly ignorant of the whole business, are outvoted by those who want relatively unfettered technoscience, have been led to suppose the activities entirely safe, or otherwise cannot meaningfully be said to have given consent. My purpose here is not moral blame or philosophical inquiry; it is merely to point out that enormous authority is being exercised in choosing to proceed with potent new technoscientific capacities, given the certainty of unintended consequences and the possibility of highly negative ones. Thoughtful members of a technological civilization would face up to the authority that has intentionally or unintentionally been delegated to scientists, rather than assuming it away as do most technoscientists, their allies, and even some of their opponents. A closely related exercise of power is that which goes into shaping agendas and shaping the governing mentalities via which both influentials and non-influentials think about issues.
This is one of the most important ways that power manifests, frequently leading to non-decisions rather than outright controversies. (For example, when the European Commission declines to pass nanotechnology regulations year after year, that is a kind of non-decision that really amounts to a decision.)


Technoscience excels at shaping human thought and behavior by opening up new possibilities that gradually or quickly come to be perceived as necessities. Thus, chlorine chemists and their scientific knowledge helped set the agenda for many environmental problems: PCBs in the Hudson River are polychlorinated biphenyls; DDT, dieldrin, and aldrin are chlorinated pesticides; CFCs that deplete the ozone layer are chlorofluorocarbons. Industry actually manufactured the chemicals, of course, and millions purchased them, so chemists did not act alone; but they certainly created the potential that set the agenda. In doing so, the chemists inadvertently helped teach humanity (including future chemists) that unintended consequences are a normal part of technological innovation, part of what is termed "the price of progress." As soon as a new capacity is developed, many people come to consider it unthinkable to go back to an earlier state -- whether that is to a chemical state prior to organochlorines, or (soon?) back to a "primitive" era when technoscientists and industry did not manipulate matter at the nanoscale. More subtly, in the 20th century, the potentials of brown chemistry and chemical engineering were so technically sweet and so agriculturally useful in combating pests that it was unthinkable to require that chemicals be proven safe prior to introducing megaton quantities into the ecosystem. Creating unthinkability of these kinds arguably is the most subtle and most potent form of technoscientific power. One of the main reasons that technoscientists can do this time after time, while maintaining their privileged position, is that they have considerable legitimacy and credibility in the eyes of most persons.
Scientists in films sometimes are socially awkward and preoccupied with matters far from most people's everyday realities, but with the exception of renegades such as the dinosaur cloner in Jurassic Park, scientists normally are depicted as innocuous or as proceeding within acceptable norms. There are some aberrant situations, such as the Tuskegee syphilis experiments, that came to be universally condemned. And there are some epochal events that lead even the scientists involved to ponder what they have done, as occurred with the atomic bomb. But these are rare events; almost no one questions whether scientists have a right to pursue the research they do -- even if, on reflection, one might be hard-pressed to explain exactly from whence that right emanates. Similarly, although many people experience skepticism or impatience when expert opposes expert in testimony at a trial, in the absence of such conflict most of us are disposed to assume that technoscientific experts pretty much know what they are doing within their fields of endeavor. Restating that seemingly sensible and harmless assumption darkens its implications: University chemical and nanotechnology researchers going about their normal business are not questioned closely by outsiders, and hence are not very accountable for their research or teaching (except, to a degree, to other members of their subfields). This helps insulate the professors from undue external influence -- except perhaps from those awarding grants and contracts -- but it also partially insulates them from appropriate external influence. Which verbs should be used to describe the influence relationships? Have the scientists seized authority, or been delegated it? Are they imposing outcomes, or suggesting trajectories? Do they decide, or do they negotiate? My own sense is that many different influence-oriented verbs apply at various times to various facets of technoscientific outcomes, and that the influence relationships are ineffably messy. What can be said with clarity is that some technoscientists have considerable latitude in their actions, that interactions produce outcomes that sometimes have considerable impact on the world outside of science, and that technological civilization may lack both the discursive and institutional resources for holding the influentials accountable.
Discussion

Although those wielding scientific knowledge clearly exert enormous influence, the nanotechnology and green chemistry cases provide little or no evidence that scientists as individuals or as identifiable groups have the power to defeat other elites when there is manifest conflict. Military, industry, and government elites often give technoscientists quite a bit, of course; when there is overt bending to be done, however, it tends to be the researchers who do it. They want government or industry funding; they lack the resources to evade or fight new government regulations; universities cannot operate without business and governmental largesse; as employees, the scientists and engineers are simply following the boss's orders. Any one of those rationales will lead to bending, and the combination is of course even more potent. Further reducing any power in the conventional sense that might be imputed to scientists is the fact that to a substantial degree they are creatures of the cultural assumptions of their societies. They think and act the ways they do partly because they are cognitive victims of their cultures: They simply have not gotten much help in revisioning what chemistry could be, nor in recognizing that rapid R&D and scale-up often prove problematic, nor in thinking through the manifestly undemocratic implications of the privileged position of science. If the mass media focus on conventional problems such as endangered species and chemical spills, rather than on restructuring molecules, scientists and engineers are among those whose mental landscapes become dominated by a focus on symptoms, who become unable to rethink the underlying causalities. If emerging or speculative gee-whiz capacities predominate in stories about technological futures, it is not only the attentive public whose thinking is thereby misshapen; technoscientists' thinking likewise is stunted. For example, as of this writing, to my knowledge no mass media source or semi-popular science publication has ever pointed to the obvious similarity between nanoscience and green chemistry: both are about rearranging atoms and molecules to supposedly better serve (some) humans' purposes.
Although chemists ought to be able to figure that out for themselves, in most respects they are just ordinary people operating according to standard procedures and cognitive schemas. Neither the cadre of chemistry-ignorant journalists nor the rest of technological civilization has given chemists much assistance in breaking out of those molds. It may seem strange to say that chemists need to be informed by non-chemists, but it is a fact. As a vice president of Shaw Carpets, a business executive without an advanced degree, expressed the point at a congressional hearing where I also testified: "I am the guy whose job it is to tell the chemists and chemical engineers that they can make carpets using environmentally friendly processes. I have to do that because they come to the job from universities where they have not learned about chemical greening." That kind of insistence has been altogether missing in the contexts in which most chemical R&D personnel have worked for the past century; the equivalent cuing is missing in the contexts now populated by nanotechnoscientists. That said, it nevertheless is worth reiterating that both technoscientists' failures to actively pursue green chemistry and their possibly misguided rush toward the nanoscale are occurring without the knowledge or consent of the vast majority of those potentially affected. Some people and organizations are exercising discretion on behalf of, or against, others, and hardly anyone seems troubled by the fact. Scientists are content to proceed without consent partly because it often is in their interests, but also because they have not learned to do otherwise -- the social order has not arranged to teach them. Thus, green chemistry leader Professor Terry Collins of Carnegie Mellon insists on chemistry ethics being central to the curriculum. I doubt whether that is nearly enough, because not only do business-oriented incentive systems give strong motivation for setting ethics aside, but legacy thinking subtly impairs just about everyone's capacities for thinking straight about emerging technoscience.
One learns to accept that private businesses have a right to do whatever will sell profitably; learns to accept scientific research as the equivalent of free speech rather than as just another social activity subject to periodic renegotiation; learns to believe that unlimited technological innovation is both inevitable and highly desirable; and learns to assume that unintended consequences are just a regrettable fact of life rather than a design challenge to be taken on. These assumptions or myths align in ways that accord legitimacy and capacity to scientists and their allies, and they impair scientists' thinking and practice along with that of everyone else. In applying the above insights more specifically to the blocking of green chemistry and the acceleration of nanoscience, it makes sense to think in terms of a combination of technological, economic, and cognitive momentum. Brown chemists and their industrial allies and consumers of chemicals could be foregrounded as the decision makers, but it might be more accurate to de-individuate by saying that green chemistry has been marginalized by brown chemistry. This is true in a literal sense, for textbooks brimming with brown chemical formulas make it difficult for green chemical ideas to find a place in the curriculum or laboratory. But it also is true in a more socially complex sense: Chemists have been trained in brown chemistry; industry accountants are looking at huge sunk costs in brown chemical plant and equipment; and the momentum itself reinforces assumptions that obscure the possibility or even desirability of an alternative. Moreover, government regulatory procedures and laws are set up negatively -- to limit the damage of brown chemicals -- instead of positively and actively seeking the reconstruction of chemicals to be benign by design. Similar social processes are likely to interfere with sensible governance of nanoscience's emerging potentials. The blending of structural position, quiescent public, and habits of thought weaves together into a complex play of power. A simplified, condensed story of brown/green chemistry and nanotechnology might run along these lines:
1. Some technologists glimpse new capacities and begin to develop them.
2. Political-economic elites come to believe that these may serve their purposes, and make additional funding available.
3. Other technoscientists move into the emerging fields, partly in order to obtain the new funds.
4. The relevant scientific communities proceed to develop new capacities at a rapid pace.
5. Some of the new capacities then are scaled up and diffused by industry (and sometimes by government, especially regarding military applications).
6. Given the relatively slow learning from experience that humans and their organizations know how to do, journalists, interest groups, government regulators, and the public cannot learn fast enough about the new capacities and problems to institute precisely targeted protective measures in time.
7. And, in fact, many people actually have no disposition to interfere, because they have been taught that "You can't stop progress."

The green chemistry and nanotechnology cases offer an opportunity to reflect regarding which decisions appropriately can be left to scientists, which to industry and markets, and which really deserve broader scrutiny and deliberation by media, public interest groups, independent scholars, government officials, and the public. How can scrutiny come early enough, how can it be sufficiently informed and thoughtful, and by what institutional mechanisms? Considering how profoundly scientific understanding, and lack of it, has affected everyday life in the case of synthetic organic chemistry, and probably will do so in the case of nanoscience, it seems to me that many people need to think harder about the constitution of technological civilization. For scientific knowledge and technical know-how are not mere tools; they actually help constitute and reconstitute social structures and behaviors. Because decisions about technoscience have transformed everyday life at least as profoundly as what governments do, those used to thinking of voting and traditional government as the be-all and end-all of democracy might want to look more closely at the authority relations embedded in the R&D system, whose innovations lead to fundamental changes in the ways people spend their time, money, and attention. As Winner puts it, technology actually is a form of legislation, inasmuch as "innovations are similar to legislative acts or political foundings that establish a framework for public order that will endure over many generations." In deciding that the built and natural environment would be shaped by plastics and other chemical products, governments were involved, certainly, but chemists and chemical engineers, in conjunction with business executives, arguably were the primary policy makers. Decision making, or non-decision making, of that sort now is shaping the future of nanofabrication and other emerging nanotechnoscience. If experts are to participate more helpfully in the future in nongovernmental (as well as governmental) policy making, nontrivial revisions in the social relations of expertise almost certainly would be required. At present, scientists arguably work according to agendas that are at least partly illegitimate -- shaped without sufficiently broad negotiation, oriented substantially toward purposes many people would find indefensible if they understood them, and pursued without sufficient attention to the requisites for acting prudently in the face of high uncertainty. It is impossible to say how much of the power rests with scientists, how much with political-economic elites, and how much with systemic forces (if such actually can be separated out from those in the powerful roles), because the forces, vectors, and sectors influence one another in ways too complex to parse. What we can say with some assurance, as David Dickson expressed the point two decades ago, is that science is a powerful tool that can help us understand the natural universe in potentially useful ways, but at the same time carries the seeds of human exploitation.
How to tap the one without falling victim to the other is the key challenge in guiding scientific inquiry and its utilization toward justifiable public purposes. Ceding authority to scientists and their allies is often a serious mistake, one that is in direct conflict with the updated maxim from the American Revolution: "No innovation without representation."


Chapter 13. Can/Should Technoscientists Do More to Promote Fairness?

Myths of scientific progress essentially say, "Science is good for humanity, and more science is always better than less." It is almost unthinkable to inquire into whether the claim is accurate. This chapter boldly or foolishly takes on just such a task: Is there reason for concern that the directions, pace, products, and consequences of science might contribute to unfairness? How easy would it be for scientists to guard against that? What, if anything, stands in the way of directing scientific attention to the full spectrum of diverse humans' needs?

Beginning to Think About Technoscience and Fairness

An example of unequal attention may help anchor the inquiry. Biomedical research priorities are skewed away from the most severe health deficiencies, those of the poorest people in poor nations. Instead, the health problems of affluent populations get the lion's share of attention despite the fact that these already are among the world's healthiest people. This phenomenon is known as the "10-90 problem": less than ten percent of health research worldwide is aimed at problems accounting for over ninety percent of the global burden of disease. This example is especially striking,

My thanks to Daniel Sarewitz for stimulating my thinking on the problems covered in this chapter; see our coauthored article


but is it perhaps somewhat representative of how technoscientific attention is allocated in many realms of inquiry? One reason for thinking that disparities in attention might not be unusual is that technoscientific research is expensive, and it is the affluent nations that can best afford it. Moreover, when science-intensive pharmaceuticals and other products are commercialized, they typically are sold mainly to the affluent. So why would profit-oriented companies pay for research aimed at those who could not afford their products?

It is of course possible that the funders refuse to pay for research that many scientists really want to do, but one does not hear much of an uproar to that effect from university researchers (who would be more likely than industry technoscientists to be able to get away with complaining). In fact, the scientific community generally seems rather complacent about who gets what from science -- so long as funding keeps increasing for their labs, equipment, graduate students, technicians, postdocs, and conference travel. Some of that probably is simple self-interest, but even the best-intentioned researchers may suppose it to be someone else's job to take care of global inequality. They may believe that the poor simply need more money so that they can afford the benefits of technoscience; the poor do not need different scientific research or technological R&D. If inequality is a political and economic issue that should be solved by others, researchers can simply continue doing their thing. This explanation is partly inference; what is directly observable is that relatively few of those shaping technoscientific research and R&D speak or write as if they are trying to use their work as a lever to assist the world's have-nots.

Another reason for worrying that technoscience might tend toward unfairness is that the ratio of private to public investment in science has been increasing rapidly over the past several decades, especially in the U.S.,
where a plurality of the world's R&D is conducted. In other words, a rising proportion of the world's science agenda responds to the priorities of


business corporations and their relatively affluent customers. There also is increasing privatization of publicly funded scientific knowledge through intellectual property regimes such as patents and university-based start-up companies; this is encouraged partly because elected officials think that helping business is the best way to create jobs and compete economically with foreign countries.

A third reason for expecting technoscientific inquiry sometimes to lead to increased inequality is that job requirements shift in response to innovation. Millions of factory jobs obviously have been "lost" to China, phone inquiries and complaints from consumers routinely are routed internationally to call centers in India, and businesses from the European Union, the U.S., and other affluent regions increasingly shift their investments to Asia, Latin America, and other areas with lower wages, more favorable tax regimes, and other advantages. Many jobs of course have not been shifted, but those with the right education and skills are best positioned to benefit in the so-called knowledge economy. By contrast, those without college degrees in computing or other in-demand fields often end up taking low-wage jobs in the information-technology-enabled service sector. Technoscientists do not directly design economic opportunity structures, of course, but their work has been crucial in changing the kinds of jobs available, a process in which some win and some lose.

Fourth, the technological sophistication of the U.S. military means that casualties during warfare increasingly are borne by the losing side. In the 1991 Persian Gulf War, Iraqi casualties were roughly a thousand times greater than those of the U.S. military. Much of the history of technological innovation can be written in terms of military competition, of course, and creating inequity by reducing one's own casualties and increasing those of the other side is precisely the goal. Nevertheless, given that the U.S.
spends about three times more than the next six national powers combined on military R&D, one can hardly avoid acknowledging this sector as an important source of inequity.


How Might Technoscientists Seek More Egalitarian Outcomes?

While acknowledging that inquiry and innovation sometimes contribute to inequality, is it nevertheless possible to identify some major categories of social life where science may tend to reduce unfairness?

1. More R&D Focused on Poor People's Problems?

To the extent that poor or disenfranchised people disproportionately face serious problems that could be reduced through science, focusing more scientific research on such problems ought to be equity enhancing. Given that resources for conducting research are concentrated in affluent countries, it would take some combination of enlightened self-interest and genuine altruism for affluent-country technoscientists and funding agencies to redirect resources and energies. And it is difficult to foresee what forces might bring that about.

The global biomedical arena offers a glimpse of how this conceivably could become feasible. A considerable confluence of high-level yet non-traditional players has emerged, ranging from the United Nations and the World Health Organization to the Rockefeller Foundation and Bill Gates. The Gates Foundation almost single-handedly created the world's contemporary focus on conquering malaria. One may doubt that this phenomenon is likely to be replicated for other worthy problems, but it clearly is an approach to equity-oriented science that deserves serious attention.

Reprioritizing also could be done in ways that address problems with indirect impacts on equity. For example, global dependence on oil and coal for energy, chemicals, and materials exacts a substantial toll on the poor. They live closer to major sources of pollution, they have less ability to pay high prices for energy, and warfare tends to affect them disproportionately, both


in the sense of becoming soldiers and in the sense of suffering the collateral damage of war. Most knowledgeable observers believe that the world is underinvested in energy R&D, and inquiry/innovation applied to replacing coal and oil with cleaner, less geopolitically potent technologies could contribute to global equity. Why energy R&D has not received the public profile or level of investment of biomedical science is rather mysterious; nor is it clear why equity considerations play such a small role in energy conversations.

2. Focusing Science on the Creation of Public Goods?

A second hope for fairness-oriented technoscience is in the creation of public goods (e.g., parks) that can be accessed by all but are paid for by tax monies that come more from affluent taxpayers. Public goods also tend to be available without cost to the user, or at least are subsidized, so access to them should be more equitable than goods and services available through ordinary buying and selling. Some types of environmental research and innovation produce cleaner air and water, for example, and the less well-off tend to suffer disproportionately from polluted air, water, or soils, or an unhealthy built environment (e.g., lead paint, asbestos insulation). But the matter is far from straightforward, because, for example, well-off people are likely to have more leisure time and other resources required to benefit from some environmental public goods, such as restored or natural wildlands.

Caution about the advantages of public goods is warranted also because publicly funded scientific inquiry does not always remain a public good. Much has been made of intellectual property claims by universities and by faculty with businesses on the side, but probably more important is the routine utilization of publicly funded science by businesses. Federally funded research on fungi and mold, ionization, and related topics now is showing up in products used disproportionately by the affluent.
Consider this ad in Forbes magazine for the Toyota Camry: "Any car can have a navigation system, but what about an immune system?"


A new HVAC system...uses Plasmacluster ionizer technology to help reduce airborne mold spores, microbes, fungi, odors, germs and bacteria inside the passenger cabin. The plasmacluster ionizer does this by artificially creating positive and negative ions that seek out and surround harmful airborne substances. The system also features a micro dust and pollen filter, along with an antibacterial coating designed to minimize the growth of mold spores.

Especially in light of the rising incidence of asthma and allergies among children, is science helping to turn what arguably should be a public good -- clean air -- into a scarce, private resource available disproportionately to purchasers of expensive vehicles? Is the new technology likely to turn up anytime soon on mass transit used by the less affluent?

Certainly some products of science, such as polio vaccines, can be viewed as de facto public goods that were equitably distributed. To the list of examples of fairly distributed public goods some people would add the intangible benefits of space exploration or national military security and the more concrete benefits of transportation infrastructure. Others would argue that some of these are public bads, not goods. But in either case, the distributional inequities do not loom large.

Even in a market-oriented society, there could be ways of increasing the fraction of science aimed at public goods. More research and higher standard setting for environmental quality and consumer protection could be considered a public good, inasmuch as government pays and less well-off people are likely to benefit. Making automobiles less repair-prone and easier/cheaper to repair could be considered a public good deserving of government-supported R&D, for example. Even though it would benefit everyone, the marginal contribution to poorer drivers would be greater. One must admit, however, that the above examples do not seem like the kinds of research that real scientists would want to pursue.

3. Reducing Inequity by Reducing Price?


When less well-off people can more easily purchase products already consumed by the affluent, equity is served. Discrepancies between haves and have-nots are reduced, and the marginal benefit of increased affordability often will be greatest for the poorest. Technoscience certainly has played an important role in increasing the affordability of many products (e.g., plastics). Unfortunately, it is hard to think of many instances where the changes have proven purely desirable.

Thus, agricultural science has been enormously successful at increasing productivity and decreasing the direct costs of food. Real prices (adjusted for inflation) for most agricultural commodities have declined over the past four decades, with cereals and tropical beverages falling the most, meat and dairy falling the least. In fact, according to the United Nations Food and Agriculture Organization, agricultural research and other agriculture policies have been so successful in stimulating supply that
Government policies in both developed and developing countries have seriously distorted the over-supply problem in agricultural markets. The vast majority of the world's poor and hungry, who live in rural areas of developing countries and depend on agriculture, suffer losses in income and employment caused by declining commodity prices which generally outweigh the benefits of lower food prices.

The agricultural sciences have also been implicated in environmental degradation, including over-emphasis on chemical inputs, and in sociocultural destabilization of U.S. farm communities and rural social structures in many poor countries. Ample food supplies coupled with low prices also have helped drive peasant farmers to give up their traditional ways of life and move to overcrowded cities with unsafe sanitation and a lack of adequate drinking water.

Agricultural policy thus cannot be held up as generally exemplary, but it nevertheless merits attention in one respect. In contrast with many areas of the sciences, agricultural research has generally not been of an elitist sort. Rather, agricultural R&D has often been conducted at institutions with close connections to local farmers, and there have been fairly good connections


between forefront researchers and agricultural extension agents who educate farmers about how to improve crop yields, maintain good soil conditions, and avoid polluting water supplies. There probably are lessons there for other domains of technoscience.

Beyond agriculture, innovation processes have led to decreasing prices for technologically enabled consumer products from plastic bags to televisions to washing machines. Most of the incremental innovation takes place in so-called private businesses, but it often takes off from publicly funded materials science (car tires), chemistry (fire retardants in televisions), and electricity storage (Ni-Cad batteries). Scientific research helps also with health and safety regulations that may assist ordinary workers more than white-collar engineers and other professionals. Even the rise of mass-market retailing exemplified by Wal-Mart has been built partly on a foundation of scanners and bar codes, containerization, and electronic databases, all of which derived partly from military or other governmentally funded R&D.

As was true of agricultural innovation, the R&D that contributed to greater productivity in the manufacture of consumer goods also disrupted the roles of skilled factory workers who once had jobs that paid well for those without higher education. Whether the dispersed equity benefits of cheaper consumer goods from science-enabled productivity outweigh the concentrated and severe inequities suffered by displaced workers is hard to calculate and inherently contestable.

4. Greater Honesty about Technoscience and Equity?

Support for scientific research is often justified, explicitly or implicitly, in terms of its ability to enhance societal equity, for example by lowering the costs of medical treatment, or solving problems disproportionately borne by poor people.
Agricultural biotechnology has long been promoted as a solution to developing world food problems; nanotechnology is now being promoted as a solution to developing world water, energy, health, and food problems; and various enhancement technologies are promoted for their


ability to eliminate disabilities. But all such claims are in an important sense necessarily false. Science per se cannot achieve any particular outcome; science works within a broader set of social, cultural, political, and economic conditions in contributing to solutions.

And sometimes the claims are largely bogus. For instance, a recent report offered nanotechnology as a solution to important problems suffered by poor people in poor nations. Most problems therein discussed, such as poor-quality water, obviously could be ameliorated using existing technologies -- if there were the political will and the global funding. In proposing big new expenditures on scientific research, science implicitly is being offered as a way of delaying expenditures or otherwise not dealing with the real political barriers. That is not to deny that nanopore filters might eventually be helpful to some people whose water is polluted, but deeper wells and more reliable pumps would be a more sensible, quicker approach.

Another form of dishonesty is that claims made about the benefits of new areas of science almost always include an assumption that the benefits will be distributed fairly. And the glowing claims rarely say much about the actual contexts in which the new science will be deployed and disseminated. Yet equity implications can only be assessed in light of a broader context, as in the November 2004 California ballot initiative, Proposition 71, to allocate $3 billion of state bond funds in support of embryonic stem cell research. A coalition of venture capitalists, scientists, entrepreneurs, and disease advocates spent $24 million to promote this voter initiative, boldly predicting that if the initiative were successful, the resulting research would rapidly lead to cures of conditions and diseases ranging from spinal cord injuries and Alzheimer's to type 2 diabetes and Parkinson's.
Had the promoters of Proposition 71 also been concerned about communicating to voters useful information about equity, their predictions might have looked something like this: While a small number of significant therapies might result over the next 10-20 years from embryonic stem cell research funded by the initiative, cures for most of the specific diseases


potentially subject to stem cell therapies will remain elusive due to unanticipated technical complexities. For example, understanding and controlling cell development and differentiation processes probably will remain rudimentary during this period. Scientists, however, will continue to make progress and will need continued public investments to stay on course. Meanwhile, continued inflation in the price of health care will disenfranchise increasing numbers of Californians from the health care system that would deliver the new therapies, while significant income from patenting the advances funded by the initiative may accrue to institutions and persons conducting the R&D.

Of course, if voters had considered Proposition 71 in these terms, they might well have rejected it -- and that is precisely the point. Public expenditures for technoscientific research ought to be discussed not as an autonomous and inexorable path toward desired outcomes, equitably distributed, but as one element in a complex web of causes and effects, sometimes including greater inequity, sometimes lesser. If science were promoted and discussed honestly, science policy decisions might be made more cautiously; certainly they would be made with greater awareness of the complexity of their implications. And those who care about inequity might have a better chance of raising questions about scientific trajectories that help maintain inequity.

This chapter's argument has been simple, and an organization as mainstream as the U.S. National Science Board has said much the same thing:
To ensure the most effective use of Federal discretionary funding it is essential that agreement be reached on which fields and which investment strategies hold the greatest promise for new knowledge that will contribute most effectively to better health, greater equity and social justice, improved living standards, a sustainable environment, a secure national defense, and to extending our understanding of nature. It is intrinsic to research that particular outcomes cannot be foretold; but it is possible, indeed necessary, to


make informed choices and to [...]



This chapter asks no more than that. Actually acting in accord with such fine words would turn out to be more controversial than members of the National Science Board have realized, of course. Despite an overall decline in U.S. death rates, poor and poorly educated people still die at higher rates than those with higher incomes or better educations, and this disparity actually increased between 1960 and 1986, and possibly over the past five years during the Great Recession. This was during a period of unprecedented public investment in biomedical research. According to my values, such outcomes are not just unfortunate but immoral.

I believe the pattern is sufficiently clear, pervasive, and long-lasting that participants in science policy and in scientific inquiry cannot credibly pretend not to know. Their willing, even if tacit, participation in perpetuating the situation is strongly in tension with the professional ethics that scientists, engineers, and their professional associations often claim to uphold. Most scientists and engineers no doubt are well-intentioned individuals. But good intentions are not the same thing as deep reflection regarding the actual effects -- and lack of effects -- that technoscience has in promoting basic fairness. Right now, hardly any technoscientists are actively opposing research directions that do not work for social justice. In contrast, a very large number are behaving in ways that serve all too well the privileged positions of science and business, and that promote the normal workings of a consumer culture dominated by those in the upper third of humanity in income and wealth.

In closing, it is difficult to deny that new technoscientific capacities introduced into an inegalitarian world tend disproportionately to benefit the affluent and powerful. In other words, it is difficult to help the poor by doing scientific research or by innovating technologically.
As explained above, however, partial exceptions to the rule include:

- Public goods paid for out of tax dollars.
- Innovations aimed specifically at have-nots.


- Innovations that make the world less dangerous for the non-affluent (such as clean-up of polluting facilities).
- Innovations that make important goods more affordable to the poor, but that do not lead to over-production of trivial or unneeded items. (This may be nearly a null set.)

An obvious, albeit uncomfortable, implication: Those who seek to make the world fairer might need to oppose some or much technoscientific research and innovation just to get the attention of those now working on their own projects without much awareness of who is being helped and harmed. Without some such attention and reconsideration, it is difficult to see how researchers and their organizations would even consider taking explicit, nontrivial steps to reorient their work to assist have-nots in the ways summarized above.


Chapter 14. No Innovation Without Representation?: Human Enhancement

In the bestselling book Radical Evolution, Washington Post editor/reporter Joel Garreau proposes that the next few decades are likely to be a turning point in history. From soldiers' exoskeletons to bionic eyes, from chips in the brain to dramatically improved learning and recall, a few thousand scientists and engineers are innovating in ways that could end up driving the next stage of human evolution. My intention in this chapter is to very briefly describe some of the arenas of innovation, and to raise for your consideration the possibility that epochal changes in the conditions of human life ought not to be allowed unless endorsed by a majority of humanity after extended inquiry and debate.

Overview of Enhancement
The term enhancement can mean so many different things that any schema could be challenged, but in this chapter I will divide the enhancement universe into five categories:
(1) Getting rid of inheritable diseases and defects, such as cystic fibrosis, considered to be so devastating that their elimination would be desired by a huge majority of humanity from every corner of the planet.

(2) Assisting those who are handicapped to live a more functional life: examples would range from brittle bones in elderly women, to damaged body parts, to dementia, to the assistive technologies associated with physicist Stephen Hawking.

(3) Improving ordinary people's performance in relatively linear, modest, and predictable ways: better memory and better health, for example.


(4) Exacerbating the problem of winners versus losers by providing some people with a marked advantage over others.

(5) Changing what it means to be human.

You will not be surprised that I am primarily worried about categories four and five. I should point out, however, that those who oppose abortion might not agree even with category one, and there can be some nontrivial controversies associated with categories two and three as well. In the intermediate to long term, for example, medical costs may be reduced by various interventions and enhancements, but most of the proposed innovations have up-front costs, sometimes substantial, and each such expenditure presumably will come at the expense of something else -- such as ordinary health care for those who do not yet have access to 21st-century medicine.

Category one: Genetic Screening

Some of the worst inherited diseases already are declining, because more people are using genetic testing to decide whether to have children (and whether to abort fetuses carrying the disease). Good statistics are kept on only a handful of the hundreds or thousands of inheritable diseases, but it is clear that an increasing fraction of women are being tested as part of routine prenatal care. A California study found that the number of babies born with the severest form of cystic fibrosis had been reduced by 50 percent because many parents decided to end pregnancies when screening revealed the disease in the baby. Additional disorders gradually are being added to the battery of genetic tests. Even couples with no family history of problems are asking to be tested, because it is of course possible for random mutations to create new carriers. Other couples are screening embryos and using only those without problem genes. As an increasing number of companies compete and as new testing techniques are developed, costs are declining -- there is now


a $349 saliva test that screens simultaneously for over a hundred inheritable disorders. Another barrier to genetic testing was removed in 2008 when a U.S. law prohibited discrimination by insurers and employers on the basis of genetic tests; this eased many people's fears that information would be used against them or their children.

An example of the impacts these changes are having is the case of a Canadian couple, Jeff and Megan Carroll, who screened embryos prior to having their two children, who are free of Jeff's inherited and ultimately terrifying disease, Huntington's chorea. "I felt very strongly that I didn't want to pass on this," he said. "(Huntington's) is done killing people in my family when I am gone." One of my own best friends died of the disease while in his 50s, and his family and friends suffered along with him. If it were up to me, I'd definitely endorse doing just about anything to avoid others having that fate.

Some diseases such as cystic fibrosis, Tay-Sachs, and spinal muscular atrophy occur only when a person inherits a recessive gene from each parent. The genes can be present with no damaging effects for generations until two carriers mate; their children then have a 1-in-4 chance of getting the disease. Statistics for inherited diseases are not carefully collected, but relevant experts are just about unanimous in pointing to significant reductions. For example, only about a dozen new cases of Tay-Sachs now occur each year in the United States, a decline of about 90 percent. One rabbi who had lost four children to the disease founded an organization in Brooklyn dedicated to making sure that fellow Ashkenazi Jews (from eastern and central Europe) are tested. Using a confidential PIN, one can call a hotline to see if a prospective mate would be a risky match. The group has 300,000 members and tests for nine diseases, including cystic fibrosis.
Within their community, not just in New York City but also in Israel and wherever Jewish people have migrated, that single


organization has helped virtually wipe out new incidences of the diseases they screen for. Another success is familial dysautonomia, a disease that causes faulty nerve development, floppy muscles, and other problems, killing many victims in their 20s. Because of genetic screening, only a few children now are born with dysautonomia annually, and it will be entirely wiped out if the trajectory continues. Likewise diminishing is cystic fibrosis, a disease that causes sticky mucus buildup in the lungs, digestive problems, and often death in young adulthood. More than 10 million persons in the U.S., mostly whites, carry a gene mutation for the disease. In 2001, the American College of Obstetricians and Gynecologists recommended that pregnant women be offered testing, and within a few years the incidence of cystic fibrosis began dropping rapidly.

Gene testing hasn't led to declines in all diseases, of course. Many still are not known and/or are not tested for. But even some well-known inheritable diseases remain underdiagnosed. One of these is sickle cell, a blood disorder that causes anemia and pain and raises the risk of stroke; that it mostly afflicts African-Americans arguably is one reason for the slow progress.
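The 1-in-4 figure for recessive disorders mentioned above follows from simple Mendelian arithmetic: each carrier parent passes on one of its two alleles at random. A minimal sketch (my illustration, not from the book) that enumerates the equally likely combinations:

```python
# Why two carriers of a recessive disease gene face a 1-in-4 risk per child:
# enumerate every equally likely pairing of inherited alleles.
from itertools import product
from fractions import Fraction

# Each carrier parent has one normal allele ("A") and one disease allele ("a").
carrier = ["A", "a"]

# A child inherits one allele from each parent; all four pairings are equally likely.
outcomes = list(product(carrier, carrier))
affected = [pair for pair in outcomes if pair == ("a", "a")]   # disease: two copies
carriers = [pair for pair in outcomes if "a" in pair and pair != ("a", "a")]

print(Fraction(len(affected), len(outcomes)))  # 1/4 affected
print(Fraction(len(carriers), len(outcomes)))  # 1/2 unaffected carriers
```

Of the four equally likely combinations, only one (two disease alleles) produces the disease, while two produce healthy carriers who can silently pass the allele on, which is why such genes persist for generations until two carriers mate.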

Category two -- Assistive Technologies

The Roomba vacuum from iRobot is, for most of the middle class, a somewhat annoying and mildly helpful contribution to house cleaning. But for some people it is an important assistive technology. A disabled young woman wrote a letter of thanks to the company, which read in part:
Before my mom bought this device for me, I could not independently (nor effectively) clean my room. Roomba Red does not stir up any dust at all, unlike other vacuums I have used. It can get under my bed and table much easier than I can. My life has changed immensely as a result. My allergies are better controlled and I am proud to be able to use Roomba Red independently whenever I need to. --- Sara M., Massachusetts

The Roomba is of course pretty low tech compared with the outer limits of emerging and imagined assistive technologies. But the list of such helps is growing, and each item is welcomed by those who really need it. Included are special lifts to help caregivers move MS sufferers and others who lack the musculature, coordination, or other capacities to move themselves in and out of bed, to the toilet, and elsewhere. Screen readers for the blind, cochlear implants for the deaf, pacemakers for weak hearts, catheters for kidney disease, and a long list of other items fit into the spectrum of relatively normal assistive technologies. A device as common as the wheelchair has come a long way since the old hand-propelled versions of a half century ago. My father-in-law, who spent three months paralyzed in intensive care with Guillain-Barré syndrome and who remains affected despite a valiant recovery, uses a $30,000 wheelchair powered by lithium-ion batteries to go just about anywhere, including ocean cruises to Alaska. He also has a $50,000 specially adapted van with an electric ramp. Along with an indomitable spirit and an incredible wife, these technologies allow him to work on his acre garden railway, to make lengthy car trips to visit grandchildren, friends, and physicians, and otherwise to enjoy life more than many people who have full use of their limbs. A step up are somewhat more sophisticated technologies such as the voice synthesizer that physicist Stephen Hawking uses to continue to communicate and work despite a motor neuron disease similar to Lou Gehrig's disease, amyotrophic lateral sclerosis (ALS). Though Hawking is now almost completely paralyzed, these adaptive mechanisms allow him to serve as Director of Research at the Centre for Theoretical Cosmology at the University of Cambridge. From that post in England he somehow manages simultaneously to serve as Distinguished Research Chair at the Perimeter Institute for Theoretical Physics in Ontario, Canada, an ocean away.


Approximately 200,000 people worldwide suffer from ALS, and over 200 million have some kind of debilitating illness or problem that could benefit from assistive technologies. If the lifetime cost of those technologies were $10 million per person for the most severe cases like Hawking's, $1 million for intermediate cases like my father-in-law's, and $100,000 for less demanding cases, the total expense would run to trillions of dollars. That sounds like more than it really is, given the world's current income and wealth, but it still is a lot -- which is why something like 90 percent of those afflicted are not getting their needs met very well. It would be comforting to assume that further R&D will correct the problem. But it actually was R&D that brought us the wonderful devices that are so costly, and further improvements are not likely to be cheaper. There are selected exceptions, such as the corneal implants that organizations in Nepal and India have learned to manufacture and suture into poor people's eyes for a tiny fraction of what is expended in the rich countries. The exceptions probably can be multiplied if enough creativity and determination are devoted to the task. However, my guess is that better assistive technologies made available to more people needing assistance will tend to lead to greater expenditures. Because these expenditures almost always come at the expense of something else, it would make sense to deliberate about the pace and direction of research and development of assistive technologies, and to choose budget limits deliberately rather than proceeding somnambulistically. I believe that technoscientific capacities should be directed disproportionately toward those who most need assistance, so I personally would endorse more help for more people in need. And I would be willing to give up something to get it.
However, I would want to guard against blank checks, so that limited resources can go to the highest priority needs rather than to the niftiest forefronts for innovation. A robotic caregiver for every person needing assistance is not my idea of a sensible outcome.
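The back-of-envelope arithmetic above can be made explicit. In the sketch below, the per-tier lifetime costs come from the text, but the number of people assigned to each tier is a hypothetical assumption introduced purely for illustration; the point is that nearly any plausible split of 200 million people across these cost levels yields a total in the trillions of dollars.

```python
# Back-of-envelope estimate of worldwide lifetime spending on assistive
# technologies. Per-tier costs ($10M / $1M / $100K) come from the text;
# the tier SIZES below are hypothetical assumptions for illustration.

TOTAL_PEOPLE = 200_000_000  # rough worldwide total who could benefit

# (people in tier, lifetime cost per person in dollars) -- sizes assumed
tiers = {
    "severe (ALS-level)": (200_000, 10_000_000),
    "intermediate":       (20_000_000, 1_000_000),
    "less demanding":     (TOTAL_PEOPLE - 20_200_000, 100_000),
}

total = 0
for name, (count, cost) in tiers.items():
    subtotal = count * cost
    total += subtotal
    print(f"{name}: ${subtotal / 1e12:.1f} trillion")
print(f"total: ${total / 1e12:.1f} trillion")
```

Even the severe tier alone comes to about $2 trillion at the stated figures, and spreading the remaining people across the cheaper tiers pushes the total far higher, which underscores why budget limits would have to be chosen deliberately rather than by drift.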


Category three -- Modest Enhancements

Caffeine is of course one of the most common ways of enhancing attention and other aspects of mental performance. Everyone who drinks pasteurized milk in the U.S. gets a dose of vitamin D, because the law requires adding it. Gatorade replenishes fluids and electrolytes lost during intense exercise, so whether to call it an enhancement or just a food is debatable. A step or two up from those everyday enhancements, athletes train at high altitudes and otherwise follow diets and regimens to produce bodies and performances that would have been either impossible or a lot rarer just a few decades ago. Emulating the athletes in a lazier or more balanced way are the great many middle-class and upper-middle-class people who go to the gym, cut down on fatty foods, take nutritional supplements, get regular dental care, and otherwise prolong their years of relatively high-quality life. Herbs, vitamins, nutraceuticals, and other compounds believed effective for health have become at least a $500 billion industry worldwide. Omega-3 fatty acids from fish and flaxseed are said to improve memory and otherwise contribute to health. Some of the B vitamins are associated with calming restless, electronically jangled minds. A number of different compounds from chamomile to melatonin are alleged to help with sleep. Immune system resistance to colds and other illnesses is associated in many minds, and in some studies, with vitamin C, echinacea, goldenseal, and other supplements. Some women swear by evening primrose oil as an aid against menstrual symptoms. The list of pills and powders and teas and lotions believed to have healing or preventive powers is long and getting longer, as a trip down the aisle of any supermarket or drugstore will reveal, and as health food stores illustrate even more pointedly. What might be the next logical steps building upon these beginnings, which no doubt contain some substance and a good bit of gullibility?
I suspect that we are in the early stages of science-enabled supplementation. As biologists have become increasingly able to deconstruct amino acids, for example, I have been surprised at the number of plausible enhancement trajectories appearing with some regularity. Among psychopharmaceuticals, for example, enhancers of cholinergic function are expected to improve learning, memory, attention, and psychomotor performance. Drugs acting on adrenergic receptors, such as guanfacine, are being explored as ways of improving spatial working memory, and ampakines and other AMPA receptor modulators appear to improve long-term memory.

Category four -- Winners and Losers

If physical and cognitive enhancements a good deal more extreme than those discussed under category three become feasible, the small minority of early adopters quite possibly could use their newfound abilities to out-compete others. In addition to magnifying present disparities in academic success, career advancement, income/wealth, and competition for "attractive" mates, extreme enhancements also could translate into military might, national and personal economic wealth, and new social class conflicts between the Enhanced and the Regulars. There obviously is a fine line between this category and the preceding one, because even mild enhancement capacities tend to flow toward those already advantaged in money, education, militarization, or other attributes conducive to understanding, purchasing, and utilizing the new potentials. Whereas a 50 percent improvement in one's capacity to memorize would keep the mildly enhanced individual within the present normal distribution, a tenfold increase in memory would allow the Enhanced to leave virtually everyone else in the dust. It is no accident, therefore, that the Defense Advanced Research Projects Agency (DARPA), Homeland Security, and other military-oriented organizations are providing a large chunk of the funding for both university and corporate R&D aimed directly at enhancement. The "soldier of the future" will run at top speed for extended periods, carry a heavy pack and equipment with ease, and otherwise have capacities enabling victory over any foe -- if those now in charge get their way, and if the technoscientists can actually live up to the hype. The most foreseeable enhancements actually are not biological but mechanical and electronic -- such as exoskeletons enabling a normal soldier to carry far more weight, jump higher, and so forth. Augmented cognition tools include surface electrodes that would allow individuals to remotely retrieve information, enhance vision and other sensory inputs, and perhaps simulate alternative decisions almost instantly -- as the best chess computers can do. "Transcranial magnetic stimulation" of the left prefrontal cortex is being explored as a means of improving the speed of analogical reasoning. Actual interventions in the human brain and body appear likely to follow, because biologists and other scientists have made incredible progress in understanding how brains function at cellular and neurological levels. Peripheral nerve implants and cortical implants are being explored as sensory perception enhancers, with the latter potentially leading toward tele-operation of robotic and other systems remote from the person's location. Vagus nerve stimulation enhances recognition memory in preliminary laboratory research, and hippocampal implants potentially could become "neural prostheses" -- the equivalent of artificial limbs for the brain. For many of these and related matters, it is difficult even for the relevant experts to distinguish accurately between science fiction and fact, between pie-in-the-sky hopes and actual potentials. So I do not put much faith in any given gee-whiz "breakthrough."
But it is clear that a whole lot of very smart people are being provided access to enormous sums of (mostly) taxpayers' money to develop and deploy esoteric equipment and techniques; a great deal of knowledge is accumulating in many different fields; and there is a convergence among nanoscience, biotechnology, cognitive science, and information processing. To bet against some of this turning into extreme new capacities strikes me as quite foolish, though the timing is impossible to know.

Category five -- Transhumanism

"Transhumanism" has varying definitions, all aiming toward a future when human capacities are so transformed as to constitute a new stage of the species' evolution. Transhumanists differ among themselves in just how far this could or should go, but many believe that technology can ultimately rid homo sapiens of disease, and thereby can dramatically slow the aging process. There is no magic lifespan length at which one could say, "Ahah, now people have become transhuman." But if it became routine for people to live 150 years or more, I'd have to acknowledge that something pretty significant had changed. The more radical transhumanists do not want to settle just for a much longer life; they want to make death itself obsolete: "Why be human when you can be something better?," is in effect their rallying cry. The leading proponent of this, Ray Kurzweil (inventor of speech recognition software among other accomplishments), envisions a not-so-distant future he terms The Singularity. By this time artificial intelligence will have exceeded human intelligence (see graphs from Hans Moravec the roboticist), and some combination of genetic engineering, nanotechnology, and computer technology will enable mind, body, and machine to become one. Being biologically human will become obsolete, Kurzweil believes: With cyborg features and enhanced cognitive capacities, people alive at that time will have fewer deficiencies and more capabilities; they will possess the ability to become more like machines, and will be better for it. Kurzweil presently takes over a hundred pills and supplements daily to improve bodily functioning, drinks ionized water to prevent buildup of free radicals in the body (because they accelerate cellular destruction), and does some obviously sensible things such as keeping caloric intake low (shown to improve lifespan in rats) and exercising vigorously. He expects that


technoscientific understandings and interventions will be rapid enough that even a man in his sixties (i.e., Kurzweil) may well live long enough to benefit from the life extension techniques. Over the course of his greatly extended lifetime, the Singularity will occur, eventually allowing him to "download" his mind into....I'm not sure what -- computer, robot, or something. Kurzweil has a backup plan in case he dies before this comes about: He has paid to have his body cryogenically frozen by the same company that is preserving the body of baseball legend Ted Williams. Kurzweil figures that biomedical scientists within decades will develop the techniques to unthaw, reanimate, and cure the frozen ones of whatever illness caused them to die. And then he can go back to plan one. Kurzweil and other radical transhumanists may or may not be crazy in the sense of being out of touch with the everyday problems and possibilities of life as we know it; but he and his ilk are not ignorant, unintelligent, thoughtless, or uncaring. As he wrote in an article in Futurist Magazine, he foresees
a world where there is no distinction between the biological and the mechanical, or between physical and virtual reality. These technological revolutions will allow us to transcend our frail bodies with all their limitations. Illness, as we know it, will be eradicated. Through the use of nanotechnology, we will be able to manufacture almost any physical product upon demand, world hunger and poverty will be solved, and pollution will vanish. Human existence will undergo a quantum leap in evolution.

I have a lot of faith in technoscientists, but I suspect Kurzweil of wishful thinking in his projections. I also suspect that the transhumanists are putting their faith in technoscience partly because they do not want to face up to the extraordinarily difficult task of improving politics, economics, psychology, and culture. Whereas I still idealistically believe that humans can gradually develop a civilization one could be proud to live in, the transhumanists arguably have given up on social change. But whatever my criticisms, I must say that Kurzweil and other transhumanists do not seem evil or mean-spirited; their ideas may or may not strike one as implausible and/or unwise, but they do offer an alternative vision of how widespread suffering theoretically could be eased. And I have to admit that my own ideas for improving the world are not exactly being embraced by the masses. The two facets of transhumanism obviously can be separated, because it would be conceivable to greatly extend lifespan without going so far as a thoroughgoing melding of human mind (and personality?) with machine. Radical life extension does not inherently bother me -- provided it can be done in ways compatible with other worthwhile goals such as limiting overall human population size, working within energy and environmental constraints, and ensuring that young people's opportunities are not blocked by aging superiors who live on and on. In contrast, the cyborg vision of downloadable minds strikes terror into me, though I grant that others logically could see it more positively.

If one evaluates the endeavors going on under the various categories of human enhancement, it becomes apparent that a great many of the requirements for intelligent governance of technological innovation are being violated. Liberty for technoscientists and for the individuals who might benefit clearly is getting precedence over goals that are publicly debated and chosen. And the speed at which enhancements are proceeding clearly violates the requirements for intelligent trial-and-error learning from experience. Ironically, efforts in the least problematic categories, one and two, have garnered the most attention: legions of bioethicists and others have created hospital and university research protocols for human subjects, legislatures have debated and passed protections against misuse of genetic testing, and myriad media stories have taught the world about fertility clinics, embryo screening, and the like.


Yet categories four and five, which pretty clearly harbor the greater risks, have garnered less attention. That is partly because secrecy is excused on military and anti-terrorist grounds. Funded most directly by DARPA and by a handful of businesses, the radical enhancement endeavor is indirectly supported by much of mainstream AI/Nano/Neuro/Bioscience. A great many scientific and engineering researchers who were just going about their business have made contributions without which current controversies over enhancement would not be occurring. For example, bionic eyes and other implants could not have come about without precursor research aimed at understanding immune systems (to minimize rejection), miniaturization, neural pathways, image processing, and many other fundamentals. In that sense of indirect and unwitting support, thousands or tens of thousands of researchers have opened the way for the contemporary efforts focused more specifically on enhancement. They have used funding from a billion or so mostly unsuspecting taxpayers in the U.S., Europe, and Japan, but increasingly also in China and India. New inquiries, techniques, tools, and organizations in genetics, robotics, information, and nanotechnologies are turning notions into experiments, experiments into pilot projects, and pilot projects into commercializable and military enhancements. According to those who know most about what is going on, it seems likelier than not that the barriers are gradually giving way, and that it will become possible to alter minds, memories, metabolisms, personalities, babies and, if you believe in such things, perhaps one's very soul. "No taxation without representation" was the colonists' rallying cry at the outset of the American Revolution. Every U.S. child is required to read that more than once in grade school, and opposition to taxation obviously is alive and well, and not only among current Tea Party activists.
Yet the deeper point appears to have been lost, for in technological decision making one can detect barely a whimper of the democratic ideals for which so many fought valiantly and spoke about passionately in years gone by. Inasmuch as technosocial change in our era is as powerful an influence on everyday life as government ever was, it seems only a tiny step to a 21st-century equivalent of the 18th-century demand: No innovation without representation. From observing how the enhancement issue and other epochal changes are (not) being handled, however, one would have to suppose that most people have never really grasped the founding principles of the U.S. Declaration of Independence or the Preamble to the Constitution. In closing, although it is a modern heresy to ask, I have to wonder whether the scientists, computer professionals, and others facilitating the enhancement movement (and other epochal innovations) might be roughly equivalent to the armies and navies that protected the privileges of kings and nobles of yesteryear. Or perhaps the technoscientists are even part of the nobility? If the American Revolution were occurring today, would forefront technoscientists be the allies or the adversaries of the common people?




Thinking Carefully about Military Innovation

One of the great curiosities of our era is that military R&D is so uncontroversial. At least in the U.S., there is more brouhaha about the Kardashian family than about 21st-century warfare and its enabling weaponry, surveillance, communication, and associated technologies. This is true despite the fact that military R&D obviously poses grave dangers to anyone targeted, dangers surer and more severe than those posed by even the most hazardous civilian innovations. Moreover, military innovation often drives civilian innovation, as when the Internet evolved from a project catalyzed by the Defense Advanced Research Projects Agency (DARPA). Social thinkers interested in improving unwise civilian technologies therefore might get a head start by paying attention to advanced military research. But hardly any do so. Journalists likewise expend perhaps a hundred thousand times more ink on bombings and other current conflicts than they do writing about preparations for future wars. In a way, this is entirely understandable given prevailing definitions of what constitutes "news." On reflection, however, it is clear that there is almost nothing any contemporary reader or viewer can do about present wars, whereas every human can potentially learn something that might help prevent the worst of what could be on the way. There used to be a saying at the start of a war that "the boys will be home by Christmas" -- victorious, of course -- and this never turns out to be true. Wars last longer and cost more than originally anticipated; conversely, the expected gains from war almost always are less than initially assumed. Better understanding the war-making capacities now under development might goad rethinking as to whether the next conflict is likely to work out any better than previous ones. Military innovation is so important that a better balance between civilian and military technoscience probably would be warranted throughout the book, and in the next edition I might make that change. For now, however, I merely want to introduce the topic of military R&D as an illustration of the sorts of mental shifts that might be needed to think more clearly about the future of civilization.

Thinking and Not Thinking

The fact that the world thus far has avoided nuclear bombings since Hiroshima and Nagasaki is due to technical and political safeguards, as well as to restraint by U.S. presidents, Russian dictators, and others who usually are called political "leaders" (even though it may be unclear who is following, or where the so-called leaders are heading). But the lack of nuclear warfare also is due to dumb luck, as one can see from reading about more than a few near-catastrophes that occurred during the U.S.-Soviet Cold War from 1950 to 1990. Do you want the future of humanity to depend on luck? I don't. To move from luck to strategy in global military and other technoscientific governance would require many changes, some discussed in previous chapters and others going well beyond the scope of this book. A prerequisite for any significant change -- both for mundane civilian technologies and for esoteric military technologies -- would be greater clarity and accuracy in myriad people's thinking, or lack of thinking. The fundamental insight of the social sciences is that each person's perceptions, thoughts, and behaviors are to a substantial degree shaped by the social contexts in which he or she lives. This is illustrated in most chapters of the book. When one starts asking a few simple questions, it quickly becomes clear that prevailing ways of developing and deploying technoscience often are at least a bit questionable, if not downright bizarre. Yet in daily life it seems quite natural to accept rather than question the existing economic and political arrangements that govern technoscience. It would not be credible to believe that a few billion people have accidentally come to share strange ideas -- such as the belief that chemicals must be toxic, or that business executives who pollute should earn handsome bonuses, or that elected representatives do not have to demonstrate solid qualifications. No, such ideas do not simply arise in billions of minds, nor is anyone born with such beliefs; so the ideas must be learned somewhere. Likewise learned is the tendency not to question technological "progress." Disagreement is so common among siblings, roommates, coworkers, and others -- sometimes vociferous disagreement over relatively trivial matters -- that it is all the more strange that so many acquiesce so easily to so much in the technological realm. It is no more possible to track the origins of non-thinking than it is to pin down the origins of dubious ideas -- in part because the shaping began so long ago, in part because the phenomena are so complex. As a general approximation, however, there really is only one plausible explanation: media, schooling, parenting, peers, politicians, business spokespersons, and other social forces somehow must be "teaching" young people to mostly accept the status quo. Otherwise they would at least be disagreeing with each other. Given the lack of dispute about the relatively obvious, important, and inherently controversial issues discussed in this book, one must infer that many minds somehow are being aligned in the way a magnet aligns iron filings. Some people are merely dispirited, having become persuaded that no significant improvement is possible.
But a great many truly believe that there is nothing to dispute, that the basic outlines of politics, economics, and technological governance are self-evidently valid and beyond challenge. What a remarkable feat! No deliberate conspiracy could maintain such subtle cognitive formatting decade after decade, for it would become obvious, generating organized opposition. That there is essentially no opposition makes the gentle, invisible tyranny all the more remarkable. The widely shared impairments in thinking about technological civilization must have evolved over many generations, and each new generation now grows up believing that what it finds around it is natural. But neither the great accomplishments nor the horrific problems are natural; they are created and maintained by people's thoughts and actions. What does this all have to do with military innovation?

Preparing to Think about Military Innovation

Who has motivation to probe deeply and think clearly about war-oriented technologies? In the consumer choices reviewed in chapter 3, any harms would be visited upon the consumer's family, and any extra dollars to reduce toxics would come out of the consumer's pocket. So there would seem to be an incentive to inquire, think, and weigh. Despite such incentive, the level of knowledge and the pattern of observed risks suggested that thinking tends to be of a pretty low order. For military R&D and the resulting weaponry, hardly anyone has a direct monetary incentive to look into the matter. Yes, one's taxes will be spent; but few taxpayers know much about where their taxes go anyway, and even taking the time to write a brilliant letter about weaponry to a member of Congress would be a drop in the bucket compared with all the communications a legislator receives. So "rational ignorance" makes a lot of sense in dealing with distant threats over which one has no control. Executives at Boeing, Lockheed Martin, and other defense contractors obviously do have an incentive for careful planning, but theirs is asymmetric: they get contracts only when the decision is to proceed with research, development, and eventually manufacturing. University engineering, computer science, and cognitive science faculty likewise benefit only from the "yes," not from the "no." The Joint Chiefs of Staff and their subordinate military planners realize they cannot ask for everything or they will lose credibility with Congress and with civilian budgeters in the Department of Defense; within that constraint, however, few high-ranking, active-duty military officers have ever been heard to complain about too much spending on weaponry. The insiders thus have a strong incentive to proceed with military innovation, and outsiders have little incentive to think about it at all. That is not exactly a reassuring formula for those of us seeking to counteract somnambulism and momentum. Partly as a result, expenditures for weaponry, military communications and transport, and R&D leading to future weapons systems are among the least controversial in U.S. congressional deliberations, and (as far as an outsider can tell) the same usually appears to be true in China, Israel, Germany, France, Russia, and other countries with military aspirations. Peace rallies are not exactly breaking out on every street corner in any of those nations. Some would-be dissenters no doubt remain in the background because they despair of making a difference. But the quietude is so widespread and so profound that I have to join Sherlock Holmes in asking, why didn't the dog bark? His answer was that the dog was friendly with the criminal. My suspicion is that a great many people must be thinking in ways that are friendly to high levels of military innovation. I do not say that they are all wrong; the issues involved are far too complex for anyone to know.
But people disagree with each other over football teams, clothing styles, political candidates, abortion, legalization of marijuana, and a host of other matters; why don't half or so of them disagree with the other half when it comes to weaponry R&D? That there is so little active disagreement suggests that most people must believe that weaponry innovation is a good idea. If so, the belief must stem from a sense that innovation is a means to a desirable end, and so it is fair to ask whether the ends have been well served. Some history may help with that assessment. We know that more than one hundred million people died as a direct result of military conflicts during the 20th century -- roughly 19,000 deaths per week. To put that abstract notion onto a human scale, one needs something to compare it with: what else adds up to 19,000 dead? About 2,750 perished in the 9/11 attack on the World Trade Center, and about the same number of U.S. Navy and Army personnel died in the 1941 air raid on Pearl Harbor; add in all the U.S. military personnel and civilian contractors killed in Iraq and Afghanistan over the last decade (~8,000), and you still have room to include all the people who die in automobile and other accidents, homicides, and suicides in a typical month in the U.S. It is impossible for me to truly comprehend the meaning of a World Trade Center or a Pearl Harbor every day for a hundred years, and I assume the same is true for you. If so, then there is a sense in which you and I do not really understand weaponry and war. This chapter of course cannot offer THE TRUTH on such a complex subject, but it does offer a number of challenges to conventional wisdom -- challenges that may at least unsettle pre-existing mentalities concerning weaponry utilization and innovation. To put the basic conclusion up front: the case in favor of weaponry innovation is not nearly as strong as is ordinarily believed. Surveying the evidence and considering the logic, open-minded readers may find themselves pondering whether the world would be better off if weaponry innovation were substantially slowed and redirected.

The Story People Tell Themselves

A first step in this inquiry is to get clearer about what is sought from innovative weapons. On first inspection, that probably seems too obvious to even bother thinking about.
The Defense Advanced Research Projects Agency (DARPA), Pentagon planners, and their equivalents in Russia, China, and elsewhere pay scientists, engineers, and businesses to develop powerful capacities in order to deter other nations' leaders from initiating violence. New military capacities of course are not limited to weapons per se: spy satellites, communications systems, armored transport vehicles, and lots of other ancillary technologies are essential components of an overall "defense." It is hard to quarrel with the basic goal -- who could be opposed to preventing violence?!

If deterrence does not work and others do initiate violence, a backup goal of innovation obviously is to help repel an enemy and perhaps defeat them. Britain and the U.S. did so with the development of radar and other innovations during World War II. If there is going to be violent conflict, who would not prefer to win it rather than lose it?!

Another aim of weaponry innovation is more controversial and less directly aimed at defense: Some members of Congress vote for weaponry expenditures partly to create manufacturing and other jobs in their districts and states. Politicians like to be thought of as effective at "bringing home the bacon" for their constituents, and re-election is easier when voters perceive that elected officials are helping create jobs in the local economy. Largely for that reason, components of the new F-35 Joint Strike Fighter are now being manufactured in 48 of the 50 states! The employment goal can be achieved in other ways, such as building bridges, subsidizing state universities, and investing in the greening of government facilities; so it is not in the same league as innovating to deter warfare and to gain an advantage if fighting does break out. I expect that most people focus on the first two goals, to the extent that they carry around in their minds any story to justify R&D oriented toward military purposes.

Shortcomings of the Deter-or-Win Story

Deterring violence and defeating aggressors both seem like unchallengeable goals.
Regrettably, the world is a good deal more complex than the simple deter-or-win story suggests. For starters, more than one country may be innovating -- what is sometimes called an arms race. If two or more potential enemies innovate, neither may end up better off than before. In fact, both may be worse off in the sense of having spent money on weaponry instead of on education, health, or other social benefits.

Second, all parties may be worse off in the sense of having made the world a more dangerous place. If there had not been a nuclear arms race between the U.S. and the Soviet Union in the 20th century, nuclear proliferation would be less of a concern in the 21st century: India, Israel, China, Pakistan, and other countries have learned from watching/spying/reverse engineering what went on in the most "advanced" countries, and these secondary nuclear powers now possess a total of more than 500 nuclear weapons. Iran and North Korea are among several additional nations striving to build a bomb and associated missile delivery systems. The U.S. and Russia have de-escalated, yet as President Obama said in his spring 2009 Prague speech, "In a strange turn of history, the threat of global nuclear war has gone down, but the risk of a nuclear attack has gone up. More nations have acquired these weapons. Testing has continued. Black market trade in nuclear secrets and nuclear materials abound. The technology to build a bomb has spread. Terrorists are determined to buy, build, or steal one."

A third shortcoming of the deter-or-win story is that the nature of warfare changes over time, so weapons developed for one era may not be appropriate in another. Aircraft carriers were a terrific way to fight maritime battles in the mid-20th century; but in the 21st century a giant ship sitting relatively still in the ocean is not much more than a wonderful target for long-distance missiles.
The F-35 Joint Strike Fighter may be outdated by the time deliveries finally begin in 2016, fifteen years after initial conception, at a total program cost of roughly $400 billion.


Fourth, weaponry innovation offers confirmation of the point with which the book began -- unintended consequences. As airplane pioneer Orville Wright said in the middle of World War I, "I really believe that the aeroplane will have a tendency to make war impossible....As a result of its activities, every opposing general knows precisely the strength of his enemy and precisely what he is going to do. Thus surprise attacks, which for thousands of years have determined the events of wars, are no longer possible." Wright was a smart enough guy, but he of course proved dead wrong on this prediction. Even technologies explicitly designed for the military get used for purposes different from those intended by the innovators. Thus, software designed for monitoring targets for U.S. unmanned aerial vehicles was captured by Afghan insurgents and used to see U.S. troop positions. Automatic weapons developed for warfare now are utilized for drive-by shootings in drug wars in U.S. inner cities and in the drug-supplying countries of Central America. Helicopter gunships, grenade launchers, landmines, and other weapons are put into the service of African dictators, ending up facilitating brutal civil wars. In 2011, the Syrian and Libyan governments turned against civilian protesters the weaponry originally given or sold to them for national defense.

Fifth, although weaponry may some day be self-activating if lethal autonomous robotics continues on its present trajectory, at least for the present someone has to make the decisions. It is rare to find a major decision about utilization of weaponry that cannot be cogently criticized, at least in hindsight, and there actually are a fair number of cases where going to war and/or the strategy for pursuing it was considered ill advised even from the outset by upper-echelon military officers, by many elected officials, and by others who would normally be presumed to share authority. Those who tend to favor weaponry innovation probably have not given enough respect to this simple fact.


For example, Hitler's military staffs were appalled at some of his orders, and the same has been true for other dictators. Napoleon marched his forces to freeze to death in the Russian winter long before Hitler made the same foolish move. Less known or remembered, but hardly a secret, is the fact that more than a few U.S. commanders and even Secretaries of Defense have disagreed with significant elements of the wars in Vietnam, Iraq, and Afghanistan.

A smaller, more specific example occurred during the Vietnam War, when the Viet Cong were slipping over the border into Cambodia to evade American troops. National Security Advisor (and later Secretary of State) Henry Kissinger invented the notion of "anticipatory retaliation," a brilliant and outrageous linguistic ploy to justify the U.S. striking first, bombing and otherwise carrying the war into Cambodian territory. This was against international law inasmuch as the Cambodian government was officially neutral, but not even members of the Senate Armed Services Committee were allowed to vote on the issue; it was effectively decided by Kissinger, President Nixon, and a few officials in the White House and Defense Department. The same was true of the A-bomb attacks on Hiroshima and Nagasaki and the 2011 Libyan intervention authorized by President Obama. The bottom line: Literally a handful of men have typically been the ones to decide whether and how ferocious new weapons are utilized, and history has often judged their actions harshly.

Sixth, the standard thinking about military R&D does not acknowledge how difficult it is to judge whether, when, and how to use which weaponry. Part of this is a matter of conflicting values: What seems justifiable to some may seem barbaric to others. And part of the problem is a matter of changing values: What previously seemed justifiable may come to be seen in retrospect as heinous.
Probably few contemporary college students would endorse dropping fire bombs on noncombatants, including women, children, elderly, and disabled people. Yet U.S. and British fire bombing killed 25,000 persons in Dresden, Germany in a single day during World War II, devastating an area of 15 square miles. Twice that number of civilians were killed in a similar air raid on Hamburg, and 18,000 civilians died from the fire bombing of Pforzheim. The actions allegedly were essential for destroying crucial industrial capacity, but credible historians argue that Dresden was a cultural landmark of little or no military significance, and that the attacks were indiscriminate area bombing, not proportionate to the military gains. There were no single incidents of that magnitude during the Vietnam War, but napalm produced by Dow Chemical was dropped with regularity on the Viet Cong and North Vietnamese, including destruction of entire villages. Go to the war museum in Saigon (Ho Chi Minh City) someday if you want to see photos and evidence of the results. Opinions will differ, but I have never heard of an instance of firebombing that I would consider to have been justified. Hence, the persons involved in developing, manufacturing, and killing with that technology arguably should be considered criminals. Even if you completely disagree, which I assume many readers will, you might acknowledge that "deterring or defeating the enemy" via firebombing is a lot messier and more troubling than the simple story about weaponry innovation implies.

A final difficulty with the ordinary rationale for military innovation is that technologies tend to exert an effect upon the possessor. There is nothing mystical or sinister about it; it is just a simple fact that humans are influenced by the potentials available in their task environment. If a box of chocolates is sitting on your table, are you more likely to eat one (or two, or three) than if the chocolates were out of sight -- or out of the house? If your friend comes over with a Frisbee, are you more likely to play than if you had neither Frisbee nor playmate? If a homeowner owns a chainsaw and has the skill to use it, is he more likely to trim trees than someone who lacks the tool and technique?
Is there any reason to believe that the same simple encouragement syndrome does not apply to presidents and generals? Got weapons?


Availability of new weapons changes the way that military planners and decision makers think and act. They may be more confident in negotiations -- which could be effective if it brings the other side to a compromise solution, but undesirable if the boys with the new toys behave arrogantly and drive the other side away from the negotiating table. They may be more willing to order air strikes if the U.S. dominates the skies; this could be good if it brings a war to a speedy and just end, but undesirable if errant airstrikes kill civilians and turn their children into future rebels. The reality is of course a combination of those desired and undesired outcomes, and the stories people normally tell themselves about military weapons and war making rarely reflect the complexities.

Altogether, then, there are at least seven nontrivial complications deserving consideration by those subscribing to a "deter-or-win" rationale for military innovation:
1. The potential enemy's innovations may negate one's own.
2. Innovation may spread well beyond the initial combatants and purposes, making the world or sections of it a more dangerous place. Other national militaries or revolutionary groups may obtain "our" weapons and use them against "us," against our "allies," or against innocents.
3. The nature of warfare changes, making some weaponry outdated and requiring a treadmill of innovation.
4. As with other technologies, there are bound to be unintended consequences. Once weapons are developed, the originators lose control over the outcome, and their original intentions no longer matter.
5. The "who decides?" on utilization of weaponry rarely is the same group as the one that authorized the original development, and often is a very small group that consults with hardly anyone.


6. It is difficult to make wise decisions about whether, when, and how to utilize military capacities, for which purposes, and against whom. Developing the weapons is a lot easier than figuring out how to use them well.
7. New military capabilities exert complicated effects on the people who use or otherwise make decisions about them, probably leading systematically to too frequent and excessive use of force.

Obviously there have been wars and other violence as far back as one can peer into human history, so it would be foolish to blame high-tech weapons for violence. However, it is equally undeniable that higher-tech weapons have enabled higher-tech wars, with civilians at greater risk more of the time and in larger numbers than ever before. In addition to combatants and civilians killed directly in 20th-century wars, roughly 500 million people were disabled or orphaned, lost a husband or son, suffered post-traumatic stress disorder, or died from wars' indirect outcomes, including malnutrition, cholera, and other disease. My own father came home from the war in the Pacific against Japan with post-traumatic stress disorder and died an accidental death related to alcoholism at age 37, and some of his children have never fully recovered more than half a century later. Wars' effects ramify, and more powerful weaponry can ramify farther. Given the growth in world population, casualties could be even higher in the 21st century if major conflicts arise. Pakistan and India recently won the distinction of having "the world's most dangerous border," according to the news magazine The Economist.

Despite the dangers on that border, despite the nightmarish Israeli-Palestinian never-ending story, despite ongoing fighting in Iraq, Afghanistan, and other nations, there is surprisingly little concern, discussion, or action among the populace of the affluent nations regarding how to reduce the frequency and severity of violent conflict. Nor is there much discussion of whether more and "better" weapons actually make the world safer; the issue is never contested in elections for the presidency or for other high governmental positions throughout the world.


To illustrate: An estimated three billion people watched part of the 2011 "Royal Wedding," whereas perhaps one percent of that number have seen or heard serious debate on the topics in this chapter. Flying for the most part beneath public radar are university collaboration in weaponry innovation, private armies organized by for-profit corporations, the horrible plight of child soldiers, and the huge black market where anyone with money and contacts can purchase surface-to-air missiles and other weaponry not legally obtainable. Essentially never discussed is the possibility of radically downsizing or eliminating the legal armaments industry that designs and sells military aircraft, automatic weapons, landmines, tanks, and other technologies of violence. Essentially never discussed is the ridiculous (in my eyes) level of U.S. air, naval, and troop deployments continuing in Korea, Japan, and elsewhere in the world, despite the fact that little to no real use has been made of these forces in the past half century.

On all these matters, thoughtful and informed people can disagree. However, I hope that no one reading this book would doubt that it makes sense periodically to rethink one's assumptions, and the main goal of this chapter has been simply to induce you to do that. Whatever you end up believing about military innovation, you and the world will be wiser than if you join the herd of those who are not thinking much at all.


Chapter 16. Technology, Work, and Leisure [2]

In a 1930 essay, "Economic Possibilities for Our Grandchildren," one of the 20th century's most famous economists, John Maynard Keynes, predicted that rising productivity due to technological change would result in a substantial increase in leisure during the ensuing 100 years. Using abundant leisure time in a meaningful way would become one of people's central challenges, Keynes expected. Few 21st-century observers would be quite so naive -- or visionary -- but many of us still assume that technological innovation allows people to work fewer hours and have more leisure time. Is that a valid presumption? Do individuals actually have the capacity to decide whether they get the balance they prefer between work and leisure? What qualifies as "leisure" -- any hours not spent doing paid work or housework? Do iPhones, email, video games, and other communications innovations create an experience of life as "leisurely"?

According to economist Juliet Schor, working hours in the U.S. have not obeyed any such "law of increased leisure." During the late 20th century, adults on average actually worked more than they did in the middle of the century, with the increase coming primarily in the years between 1973 and 1990. This coincided with the entry of large numbers of women into the paid labor force, which obviously increased their working hours inasmuch as cooking, cleaning, and childcare did not automatically go away. Free time during those two decades fell by about 40 percent, Schor calculated, from 26 hours per week down to about 16 for the average working adult after career and household duties are fulfilled.

Schor includes time spent preparing meals, doing childcare, mowing lawns, and performing other necessary duties outside of the workplace, so part of the issue is definitional: What counts as "work"? Some of the lost leisure time was eaten up by increased time commuting to work, which could be construed as having a leisure component if, say, one listened to books on tape. Not many people would commute voluntarily, however, so it is probably fair to count commuting as a form of work.

Calculations also depend on the exact time period being measured, on whether men or women are the focus, on the age ranges taken into account, and on the sets of statistics being used. For example, some studies include work in the armed forces, which of course goes up during times of war; some studies deduct for lost work time due to illness, which has declined considerably over the past century. Altogether, good economists using different data sets and assumptions can arrive at somewhat different calculations; given the complexities, however, what is most impressive is how much agreement there actually is. In a nutshell, persons born in 1900 enjoyed approximately two hours less leisure per week than did those born in 1980, and the mild trend appears to be continuing.

[2] This chapter was inspired by Juliet B. Schor, The Overworked American: The Unexpected Decline of Leisure (New York: Basic Books, 1991); most quotes are from her book, except where noted. Another important and more recent source is Valerie A. Ramey and Neville Francis, "A Century of Work and Leisure," NBER Working Paper No. 12264, May 2006.
As one sophisticated, balanced analysis concluded in 2006, "Prime age individuals between the ages of 25 and 54 are working the same number of hours now as in 1900 because the rise in female hours in this age group has offset the decline in male hours" (Ramey and Francis, 2006, p. 2). People aged 14-24 did enjoy a substantial decline in hours worked, either in factories or on the farm, but essentially all of that gain was eaten up by increased time for schooling. Only among persons under 14 and over 55 has there been a consistent pattern of significant reductions in hours worked.

Part of the story behind the feeble connection between technological innovation and leisure concerns the sexual division of labor. In a well-known book titled More Work for Mother, historian Ruth Schwartz Cowan showed that household appliances and other technosocial changes greatly reduced the physical difficulty of laundry and other cleaning, but women's own and their family members' expectations for cleanliness increased. Chauffeuring children also came to eat up a nontrivial fraction of many parents' otherwise "free" hours. The net result, Cowan argues, was more rather than less work for the primary household caregiver, usually a woman who also was increasingly likely to undertake a paid career outside the home. Men increased their house-related labor gradually over the course of the past century, but not enough to compensate for the demands on women's time. The sexual division of labor as of 2006 showed men working approximately 18 hours per week around the home and women working approximately 32 hours. The discrepancy continues to narrow, very gradually, in two-parent households; the figures of course do not apply to the large number of households headed by single women with children, whose workload is hard to imagine for most people not in that category.

Over the past several generations, the variety of leisure activities has of course increased dramatically. Television, the Internet, video games, and smart phones are obvious, as is the shopping mall. Not so apparent are new sports such as snowboarding and rollerblading; and one of the most common, time-consuming away-from-work activities of our era was relatively uncommon half a century ago: air travel for vacations. There now are as many books published each decade as the total number of books the world had ever known prior to about 1950.
The overall effect is to produce a form of relative deprivation: Whether or not people actually are devoting more time to paid and unpaid "work," leisure time is steadily shrinking relative to the opportunities to "spend" it.


There also has been a speed-up in many workplaces. Whereas the mass production lines of the standard industrial factory long have been associated with a rapid pace of work, the techniques that efficiency experts brought to factories have now migrated into many other work domains. Call centers are notorious, with automated processes switching often irate callers to the first available "service associate" so that there is little or no unproductive recovery time between calls. Surveillance pressures workers not to waste time being excessively helpful. In other lines of work, keystrokes are counted, cameras watch employees, and companies cut workers and combine job functions to "run leaner" so as to compete more effectively with other businesses behaving likewise.

Coupled with job insecurity, the combination of pace/stress, hours worked, and the constant lure of "leisure" possibilities contributes to a sleep deficit. According to sleep researchers, a majority of Americans are getting 60-90 minutes less sleep per night than they are believed to need for optimum health and performance. The number of people showing up at sleep disorder clinics with serious problems has skyrocketed in the last decade: "Shiftwork, long working hours, the growth of a global economy (with its...24-hour business culture), and the accelerating pace of life have all contributed." Schor found that many working mothers talk about sleep "the way a hungry person talks about food," and 50 percent of them report suffering from high levels of stress, saying, "I rarely have any time for myself."

For both men and women, time available to be with their children has declined since 1960. Some estimates put the average decline per parent as high as 10-12 fewer hours per week, largely but not entirely because of working mothers and because of parents' job demands.
As one 28-year-old father working 8-12 hours per week of overtime at a factory put it, "The trouble is, the little time I'm home I'm too tired to have any fun with them or be any real help around the house."


What are the causes of the odd fact that technological wizardry has not led to much change in leisure? "People who work for me should have phones in their bathrooms," one corporation's chief executive reportedly said, illustrating the demands he chooses to place on subordinates. High-level administrators at my technological university are told they have to be available 24/7. But presumably there's more to the phenomenon than rotten bosses. Is the decline in leisure time perhaps due somehow to computerization, to increased global competition, to modern business practices, or to unknown forces that accompany the present stage of technological civilization?

There is some evidence of a worldwide speedup, but the decline of leisure to date has been much worse in some nations than in others. Employees in the U.S. work many more hours than do their counterparts in most European countries. French and German factory workers, for example, work 320 fewer hours yearly than Americans. That is the equivalent of nearly two working months of additional leisure -- and, in fact, the typical European does take much more vacation time than the typical American. This is true in part because laws in European countries require up to seven weeks of paid vacation for most categories of workers, and trade union and other labor agreements go beyond what governments require.
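The two-months equivalence above is straightforward arithmetic, sketched below under stated assumptions (the 320-hour gap is the text's figure; the 40-hour week and the 4.33-weeks-per-month conversion are my own simplifications):

```python
# Convert the 320-hour annual gap between U.S. and French/German factory
# workers into working weeks and approximate months.
HOURS_GAP = 320            # fewer hours worked per year in France/Germany
HOURS_PER_WEEK = 40        # assumed standard full-time workweek
WEEKS_PER_MONTH = 4.33     # average weeks in a month (assumption)

weeks_gap = HOURS_GAP / HOURS_PER_WEEK       # 8.0 working weeks
months_gap = weeks_gap / WEEKS_PER_MONTH     # about 1.8 months
print(weeks_gap, round(months_gap, 1))       # prints 8.0 1.8
```

Eight 40-hour weeks is indeed "nearly two working months" of additional leisure.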

Comparison with Pre-Industrial Work

How does the contemporary situation compare with work in previous eras? Peasant agricultural workers in 13th-century England worked for wages or for a feudal lord, and our image is that they were horribly mistreated. In fact, although they worked from dawn to dusk during peak harvest times, they had the right to frequent breaks -- including a customary mid-afternoon nap! And during the majority of the year, working hours were irregular. Altogether, adult males worked an estimated 1440-1620 hours per year, roughly equivalent to a modern worker on a 40-hour week with 12 or more weeks of vacation.
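The modern-equivalence claim above can be verified in a couple of lines. A minimal sketch (the 1440-1620 range is the text's estimate; the 40-hour week is an assumption):

```python
# Compare the estimated medieval annual workload with a modern worker's
# annual hours at 40 hours per week and a given number of vacation weeks.
MEDIEVAL_LOW, MEDIEVAL_HIGH = 1440, 1620   # text's estimate for adult males

def modern_annual_hours(vacation_weeks, hours_per_week=40):
    """Annual hours worked with the given weeks of vacation."""
    return (52 - vacation_weeks) * hours_per_week

print(modern_annual_hours(12))   # prints 1600, inside the medieval range
print(modern_annual_hours(16))   # prints 1440, the low end of the range
```

Twelve weeks of vacation yields 1,600 hours, at the top of the medieval range; sixteen weeks matches the bottom, which supports the "12 or more weeks" phrasing.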


The pace of work was relatively slow, as well. The economy was dominated by agrarian rhythms, free of haste, careless of exactitude, unconcerned by productivity. The concepts of hours and minutes were not yet utilized, with clocks existing only in monasteries and palaces. A long, hard day of agricultural labor requires in excess of 3,000 calories, and ordinary people simply did not have access to that level of food supply.

The medieval calendar also was filled with holidays; church holidays included long vacations at Christmas, Easter, and midsummer, as well as numerous saints' days and rest days. These were spent both in sober churchgoing and in feasting, drinking, and merrymaking. In addition, there were weeks' worth of ales to mark important life events (bride ales or wake ales) as well as less momentous occasions.... All told, holiday leisure time in medieval England took up probably about a third of the year; Spanish holidays amounted to about five months; and the French did even better in being guaranteed Sundays off together with 90 other rest days and 38 holidays -- 180 days in all!

Research by anthropologists studying traditional cultures reveals a similar story. The Kapauku peoples of Papua New Guinea, for example, still do not work two days in a row: If they work one day, they take the next one off! The !Kung Bushmen work six hours per day, but only 2.5 days per week. Men of the Sandwich Islands (near Hawaii) work only four hours per day, as do Australian aborigines. All these primitive people were/are materially poor, of course; but they are richer than many technological workers in at least one respect: time.
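The French holiday total and the !Kung workweek above reduce to small sums, checked here with the chapter's own figures (the per-day and per-week numbers are as stated in the text; the 52 Sundays per year is the standard count):

```python
# Verify the French rest-day total and the !Kung weekly working hours
# from the figures given in the text.
sundays = 52                      # Sundays in a year
rest_days, holidays = 90, 38      # text's figures for France
french_days_off = sundays + rest_days + holidays
print(french_days_off)            # prints 180, matching "180 days in all"

kung_hours_per_week = 6 * 2.5     # 6 hours per day, 2.5 days per week
print(kung_hours_per_week)        # prints 15.0 hours of work per week
```

The sums confirm the text's totals: 180 days off per year for the French, and a 15-hour workweek for the !Kung.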

In principle, each person always is free to re-choose how much time to spend working for money, how much to take as leisure, and how many hours to devote to childcare, housework, yard and home maintenance, and volunteering. In practice, most people tend to behave in ways rather similar to those in their workplace and in their subculture. Legacy thinking? Gentle tyranny? Sensible choice? Probably some combination of those and other forces. Remember, though, the exemplar offered by Netflix: no limit on vacation time. To the extent you have a choice, would you want to open such a possibility for yourself and for others? Or do you choose to be a gentler, contemporary version of the taskmasters who worked long hours in the hot Egyptian sun so that they could keep the captive Israelites making bricks to fulfill the Pharaoh's whim?

There are many other issues that would deserve consideration in thinking about the overall contours of a life worth living, a satisfying life for self and others. The decline of community and the difficulty many adults have sustaining friendships have something to do with living in distant suburbs, commuting, and the encouragements to stay home emanating from air conditioning, diverse screen-oriented entertainment, and other phenomena conducive to networked individualism rather than the frequent interactions normally labeled community. But for most adults, the quality and quantity of work constrains or liberates almost everything else. It is here that visions of a commendable civilization must begin.


Chapter 17. Envisioning a Wiser Technofuture


Might there be something of a parallel universe potentially existing alongside the present world, a much fairer and wiser alternative technological civilization? That phrasing gives more of a sci-fi spin than I really intend, but I do want to invite you to step outside the worldview you grew up in long enough to join in a final thought experiment. How would you envision an "alternative modernity," offering a better future for yourself, your grandchildren, and others?

Grownups mostly do not understand the fascination young people have for online avatars, fantasy games, and futuristic films, but I wonder if such extraordinary phenomena may represent your intuition that something is missing in everyday life. High-volume music, slasher films, zombies, binge drinking, and pill parties may play a similar function. Those phenomena are far from my reality, but I can imagine that the sort of enthusiasm that goes into youth culture potentially could stimulate creative thinking about better science and technology, better use of existing technoscientific capabilities, and better political and economic institutions to govern the civilization.

Regrettably, many college students' minds are not much more imaginative than those of their elders. In listening to students in college classrooms, I have to confess that I sometimes think of the pinned-down, dead butterflies exhibited in natural history museums. But what I think does not much matter: Have you begun to see such a "dead zone" in yourself and/or in others? Somnambulism, legacy thinking, gentle tyranny, Lilliputian wires, rational ignorance, learned helplessness, apathy, the hidden curriculum of schooling, weak ethical reasoning, acting like a student instead of as an inquirer, and related terms all can be used to outline the stunting of inquiry.

"Whoever has eyes, let them see," one version of the Christian Bible says. As a kid, I thought that was stupid -- of course everyone (who isn't vision impaired) can see; what's the big deal? Now maybe I understand a little better: Accurate perception turns out to be more difficult than it first seems, because TV and other media, peers, parents, textbooks, career paths, and everyday life seduce us into not seeing a lot of what is right in front of our faces. Just about every day this semester, the course has pointed to something pretty obvious that is not widely recognized or acted upon.

A combination of subconscious training, peer pressure and other social norms, tangible and intangible incentives, lack of time and energy, and minimal curiosity combine to produce a huge fraction of people who may have passports but do not ordinarily behave as citizens. They instead tend to conform, much as the subjects of kings and nobles did: doing their jobs, performing duties as consumers and homeowners, paying taxes, engaging in prepackaged leisure, and not much more. Such behaviors may be quite sensible, if there are no good alternatives. But it is a vicious circle: Unlike butterflies and Gulliver, middle-class people in affluent countries are held down not by pins and threads but by the cognitive gravitational pull of civilization as they have known it. Quite possibly that will not change for most people; but I hope readers will choose at least to acknowledge the predicament and perhaps to struggle against it to achieve partial liberation.
I readily admit that ideas are a weak tool with which to combat the forces that have molded a person's thinking and expectations by the time she is 18-21. Ideas also are weak compared with economic incentives, military might, and the technological juggernaut. Nevertheless, re-visioning is the best I have to offer.


Toward that end, this chapter starts with a perspective different from those I have used thus far, as a final effort to convince you that it is worth reexamining what you have grown up perceiving and believing about science, technology, and progress. I then say just a bit about an alternative technofuture that many people might prefer to the one into which billions of people now are heading.

Why Re-Vision?
Let me begin with what is perhaps the most outlandish thought that I have ever shared either in print or in a lecture: I am beginning to wonder whether many scientists, engineers, and everyday people suffer from a certain form of collective insanity. I have not put the matter so strongly up until now; rather, I have spoken of legacy thinking, of minds influenced by social contexts, and of other relatively gentle ways of conveying the fundamental social science insight that people are largely products of their cultures. Contemporary psychologists no longer use the general term insane -- they speak of bipolar disorder, ADHD, and so forth. But the Merriam-Webster Dictionary offers assistance: As well as the medical-illness meaning of the term, the dictionary offers a commonsensical, everyday meaning: insane can refer to an absurd or extreme action, such as an "insane scheme for making money." Wikipedia adds that insanity "refers to defective functioning of mental processes such as reasoning." Conversely, the dictionary describes someone sane as "able to anticipate and appraise the effect of one's actions." Does that sound to you like the sorts of behaviors we have been studying? Isn't it the inability to anticipate and appraise that has characterized many of the phenomena in Science, Technology, and Society? Sane people perceive the realities impinging on them, seek information and advice, make sensible interpretations and judgments, and take actions on that basis. Careful thinking and action do not always lead to good outcomes, of course, for unintended consequences and other mistakes are
part of life. Still, if open-minded assessment weighing a diversity of pertinent factors normally would be considered sane, what does that say about the unrealistic, imbalanced appraisals or somnambulistic inattentions that characterize so many technosocial actions? It no doubt would be going too far to equate such failures with full-blown mental illness, but isn't there something rather like insanity going on when billions collaborate in turning their homes into low-level toxic waste dumps? When casual hopes for "progress" become an alternative to intentionally designing appropriate safeguards? When publics and experts alike permit a pace of innovation that is too slow to meet important needs, or a pace too rapid to permit intelligent trial-and-error learning? And who in his or her right mind would allow those with ultimate decision authority in business, government, and the military to operate without transparency and accountability? Would a "sane" civilization operate in any of those ways?

Examples of Impaired Sanity

What would you think of someone who steals from his or her grandchildren? If not insane, surely such a person would be twisted, depraved, weird, or otherwise deeply troubled. And yet most people are busy perpetrating such thefts every day, without much concern. Worse, many of us actually validate rather than question each other for doing so. A subtle form occurs when roboticists, their business partners, and their customers develop and sell products that change the meaning of warfare and childcare, such that children born a few decades from now may find a world less human than the one you grew up in. A more obvious form of stealing from grandchildren is depleting resources that ought to be part of the heritage bequeathed to future generations. So much has been said about the world running low on fossil fuels and high on CO2 that I will not even bother to go there; instead, consider the case of helium. 
That inert gas is used to cool superconducting magnets in MRI scanners, and for arc welding, fiber optics, computer microchip production, rocket fuel,
and other high-technology purposes. The world's largest source of helium is located near Amarillo, Texas, and the resources there will be exhausted in about a decade, according to the U.S. Geological Survey. The world's overall supply of helium is estimated to last about 30 years. Many such estimates prove incorrect, because technology changes, recycling improves, and prices rise as supplies become tight. Still, from everything presently known, "When we use what has been made (from radioactive decay of uranium and thorium) over the approximate 4.5 billion years the Earth has been around, we will run out," according to chemistry and physics professor Lee Sobotka of Washington University, St. Louis, a leading expert on helium. Because there is no energetically viable way to make helium, there clearly is a substantial risk of running low on a very important element. Yet amusement parks, florists, and parents routinely use helium to inflate balloons; in fact, at Wal-Mart one can purchase for about $20 a small tank of helium complete with balloons for children's parties. Fun, certainly, but also at least a bit crazy to risk depleting crucial supplies of a scarce resource.

A very different lack of realism has been discussed at several points in the semester around the pathologies of brown chemistry. Some of the world's best brains, people smart enough to memorize all those diagrams of molecules in organic chemistry textbooks, have wantonly assisted in creating and distributing millions of tons of synthetic chemicals. Initially they did not have much of a clue as to what the effects would be; and proceeding enthusiastically in the face of profound ignorance is crazy enough. For the past half century, however, since Carson's Silent Spring, a flood of information and understanding has been revealed about transport and uptake of toxics into mothers' milk and into just about everyone's bloodstreams (95 percent of Americans are contaminated). 
The effects on hormone systems, organs, and other physical and emotional functioning are not as well understood, but the basic story is apparent. Despite it all, there has been no shortage of chemists and chemical engineers willing to seek grants for further development of perfluorinated
chemicals, aided by the funding agencies, business executives, and customers. There are increasing restrictions, especially in Europe, but the changes come slowly; and, in the meantime, new chemical threats are created while most old ones continue or even grow worse. Thus you and I continue to purchase microwave popcorn in bags that release PFOAs into our dwelling spaces and bodies; liver, testicular, and pancreatic cancers have been found in animal studies, and a recent study suggests links between exposure to PFOAs and reduced fertility in women. The chemicals also may prevent childhood vaccinations against infectious diseases from working properly. Wind currents eventually carry PFOAs and other perfluorinated chemicals into the fatty tissues of polar bears, at levels a hundred times that recorded in the average human. Shortsighted and irresponsible, certainly; out of touch with reality, quite possibly.

There is another sense in which a substantial fraction of the population is at least partially out of touch with reality: Few appear to realize that they are incapable of thinking realistically about PFOAs and many other technosocial matters. This is not as severe a misperception as the guy who walks around claiming to be Napoleon, but everyday, widespread unrealism cumulatively is more important because it helps misshape the contexts in which everyone has to live, interact, and learn. People who want to fit in are not entirely free to reinterpret and reinvent their understandings of reality. The legacy thinking within the self and emanating from others makes it difficult to reconsider established trajectories of thought and action: Slow down military robotics? Everyone knows that's impossible. The combination of inner and outer legacy thinking adds up to a gentle tyranny suffered by virtually everyone, although to a much greater extent by some. 
In addition to constrained thinking, people live in a world they mostly inherit from the past, together with one that is modified in the present by others not under one's influence. The man killed riding his bicycle on Hoosick played no role in the design of the street, which was already
maldesigned when he moved to town. Moreover, no one asked his opinion; and if he had given it, say, to the City Council, it would have made no difference, because the crummy little bike lanes were a cheap afterthought, and none of the relevant governmental jurisdictions has enough people willing to take a stand for safer bike lanes, or to pay for them. If you have not visited The Netherlands, you can see just from photos how different "biking" can be: small streets for bicycles that are physically separated from the streets populated by automobiles, trucks, and buses. That safe biking experience did not always exist; the Dutch created it in the mid-20th century in response to high rates of death and injury from automotive collisions with bicyclists. Enough citizens and government officials decided they'd had enough and the carnage had to stop.

Might an equivalent kind of change be designed to improve on the coal-fired electricity generating plants that spew into the air many tons of the neurotoxin mercury? Might there be reconsideration of whether it makes sense to ship Fiji Water thousands of miles when water and many other long-distance imports can be obtained locally or regionally? The list of such matters to reconsider is probably not endless, but it is long. Meanwhile, even the affluent and powerful are to a substantial degree trapped, or at least severely constrained, in a world not of their own design. Given such a situation, it takes heroes like Martha Crouch to stand up and fight. Sometimes they win; more often they lose; but the outcome is in some respects less important than the process. Those who do stand up against the gentle tyranny tear small holes in its fabric. 
If enough people stand with them, then unthinkable thoughts can become thinkable even for the majority of us who are never going to be heroes; and new actions sometimes can be brought about, as occurred with the Dutch bikeways, the Americans with Disabilities Act, and many other changes that once were not even on the horizon. That is a somewhat encouraging way to end an otherwise discouraging section of the textbook. The choice each student faces is simple, albeit
profound: Do you want to be fully sane, or not? If you choose sanity, then you need to face up to the fact that impaired sanity is so widespread that it is to be found in nodes of our own university, in your hometown, and in your future workplace. That is a huge problem, but it also offers an amazing possibility for transformation: If poor thinking abounds, it should be easy to find something you can challenge!

Envisioning a Saner, Wiser Technofuture

In a sense, the task of this section is already done: Just get rid of the problems outlined above. However valuable that aim, it would be a cop-out to leave things there: a negation rather than a true vision. I seek the equivalent of a mental map to provide a positive sense of direction; and I need it to be emotionally moving enough to reach not just the mind but also the heart. To develop a positive vision of a saner technofuture, a starting point you might consider is simply to declare that you stand for a technological civilization that works for every person. As a realistic idealist, I know that I will never live to see such a world. Indeed, although great improvement surely is possible, it may prove forever impossible to help everyone. Some parents do such a miserable job of raising children, for example, that their offspring are damaged irreparably by the time they grow up. A lot can be done to reduce the extent and frequency of such damage, but no one knows how to arrange for all children to be nurtured in loving circumstances. I acknowledge not only that problem but also every other sensible objection and qualification that skeptics may offer. Nevertheless, I stand for a world that works for everyone, period, end of story; and nothing can move me from that stance. Nor am I alone in that choice and vision, even if the percentages are not very favorable. Your choice once again is simple but profound: Do you choose to bet against current and future humans, or will
you make the seemingly outlandish move of standing for a world that works even for those who cannot protect themselves? Taking an outrageous stance, advocating something that no one knows how to bring about, actually has a certain logic: Nothing less, pretty clearly, would be enough to straighten out the amazing combination of suffering, foolishness, interests, wealth, accomplishments, and potentialities characterizing contemporary and future civilization. A bold stand also forces a separation from the herd; each positive, selfless, unconventional thought builds one's own character while reducing the hold of legacy thinking on mind and being. And if one dares actually speak for the presently downtrodden and for unborn victims-to-be, listeners are less able to hide in their own comfy denial-cocoons. They are induced to recognize, at least for a moment, that you are different: You are either nuts or you are a force to be reckoned with. Would you want to be such a force? If so, a place to start is simply by declaring to yourself that you henceforth intend to be a different person, one who stands for a fair sharing of the benefits of technoscience worldwide. Speaking, action, and further thinking will follow naturally, albeit episodically and with many compromises, forgettings, and temporary givings up.

Substantive Visioning
More substantively, what might a world look like that "works for everyone"? Being so far away from it, no one now alive is probably in a good position to say with any certainty. Still, some broad outlines are easy to sketch. Unlike scientists, who as outsiders to the physical world have to employ supercolliders, mass spectrometers, or complex computer models of atmospheric physics and chemistry, you know from the inside a great deal about what most humans need/want. Not everything, of course, because each culture, subculture, and even individual is somewhat different. Still, you have a great starting point.


Rawls' veil of ignorance is a useful tool to assist. By temporarily pretending not to know what life role one will inhabit, it becomes more possible to clarify some of the main qualities of a world that one would be glad to be born into. My guess is that a great many readers would want a lot of the same things in regard to food, health, housing, transport, environment, work, leisure, friendship, and other facets of life. I will just be able to touch the surface of a few of those categories here, but I hope it will be enough to enable you to fill in the blanks and to create your own vision of a preferred alternative world.

Food, Clothing, Shelter

Starting with the basics (food, clothing, shelter), does anyone really doubt that readily imaginable business arrangements could build enough houses, process enough textiles/shoes/coats, and grow enough food to provide every human with a decent basic existence? If you really think those things have to remain scarce, you must be looking at something very different from what I see. U.S. and European grocery stores have tripled in size over the past half century, the shelves are instantly restocked with mundane and exotic foods from around the globe, and agribusiness actually is paid not to grow certain crops, because overabundance could depress prices. Meanwhile, irrigation, crop hybridization, and other agricultural arrangements are so primitive in many poor countries that growing and processing of foods high in protein, vitamins, and other desirable properties are half or less of what easily could become achievable. Factory-built housing with entire modules of plumbing and electricity pre-installed can be delivered for a substantially lower price than "stick-built" housing done on site by carpenters, plumbers, and others in the construction trades. 
There is not enough capacity for fabricating such housing in the world at this point, but if there is one thing that engineers and their associates know how to do it is to build factories, stock them with appropriate machinery, and optimize manufacturing processes.


And if you have been to a mall lately, have you seen clothing stores with lots of empty shelves? The median American female discards something like 30 articles of clothing a year, long before most items are actually worn out. The world is awash in production capacity, and "humanity" has or can make plenty of clothing to dress poor families quite well. For these and other basics, such as routine medicines, the problem is not production constraints; it is distribution. A conventional response is to say, "Yes, but those in need cannot afford to buy or invest or conduct necessary research." True enough. So why not envision a world where the affluent organize to help the non-affluent achieve the basic necessities? Too big a task, too great a sacrifice? I am pretty sure that the barriers are primarily in minds and priorities; most people just do not think about it or believe it to be possible, so it isn't. How do you look at the matter: do you lean toward agreeing with the "It's impossibles"?

Engage with me in a small retrospective thought experiment: Suppose that just half the monies expended on wars that a majority disapprove of, plus half the funds devoted to weapons of mass destruction, plus a quarter of all advertising, plus the grossest ten percent of overconsumption by the affluent for the past half century had instead been devoted to the grand quest of feeding, clothing, and housing every human. How would those accounts balance, do you suppose? In contemporary dollars, I guesstimate that there would have been roughly $25 trillion freed up. Assume 20 percent of that goes for administrative overhead, gets stolen or misused, or otherwise does not serve its intended function. Focusing on the poorest billion persons, and assuming family size averaging five (probably too low), that would translate into 200 million families sharing $20 trillion. That amounts to $100,000 per family. 
Not munificent, but probably enough to replace every unacceptable dwelling on the planet while feeding and clothing everyone adequately. (I do not actually believe that just divvying up money is the best way to proceed, so the dollar figures are primarily an illustration of the capacities for action.) Looking ahead instead of backward, the world economy is much larger than it was, so a smaller fraction of the economic surplus would be needed to accomplish the same grand humanitarian task. Spread it out over a century
if you are still concerned about limitations on funds. I disagree that such a long period would be necessary, but I won't quibble, because the main point here is to get you to consider whether "impossible tasks" truly are impossible. I do not see how any fair-minded analysis of available versus needed funds could support the thesis that basic necessities are impossible. If my calculations are roughly accurate, that brings up another simple but profound choice: Do you choose to side with those who say, in essence, "What's mine is mine; build weapons of mass destruction, fight wars disapproved by a majority of taxpayers, and allow businesses to flood minds with advertising designed to deceive viewers into believing that buying more will make already-affluent people happier"? Or do you choose to side with those who say that present priorities are screwed up, and that it's past time to behave decently to those who could have been you and me if we'd been born into different families? For me that is an easy call; what I completely fail to understand is why so few apparently see it as a call at all. A more detailed and sophisticated analysis would go on to consider medicine, schooling, and additional facets of life. But the same basic analysis would prevail: Just as technoscientists can achieve amazing results by devoting sufficient skilled energy to seemingly impossible tasks, so social innovations could achieve amazing results if enough people brought half the commitment to social innovation that many scientists and engineers bring to their inquiries and endeavors.
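For readers who want to check the back-of-envelope arithmetic in the thought experiment above, it can be written out in a few lines. The dollar figures and family size are the guesstimates stated in the text, not measured data:

```python
# Back-of-envelope check of the redistribution thought experiment.
# All inputs are the chapter's guesstimates, not measured data.

freed_up = 25e12            # ~$25 trillion hypothetically freed over 50 years
overhead_share = 0.20       # assume 20% lost to overhead, theft, or misuse
usable = freed_up * (1 - overhead_share)

poorest_people = 1e9        # focus on the poorest billion persons
family_size = 5             # assumed average family size (probably low)
families = poorest_people / family_size

per_family = usable / families
print(f"Usable funds: ${usable / 1e12:.0f} trillion")
print(f"Families: {families / 1e6:.0f} million")
print(f"Per family: ${per_family:,.0f}")
```

Nothing in the sketch is sophisticated; the point is only that the quantities divide out to roughly $100,000 per family, so the "impossible task" objection is not an arithmetic one.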

Jobs and Work

To switch to a very different facet of life in a saner civilization, consider the challenge of arranging jobs and workplaces. There is approximately 50 percent unemployment among recent college graduates in Spain as I write, and overall unemployment there stands above 20 percent. Greece is almost as badly off, and the poorest worldwide chronically confront horrendous competition for an inadequate number of jobs. Even in the U.S., official figures depict over ten percent unemployed or under-employed, and the figure swells to nearly twice that percentage when those who have grown dispirited and stopped actively looking for work are included in the count.


There are two or three conventional responses. One is to accept booms and busts as an inevitable part of a market-oriented economy. A second is to blame whoever is in high governmental office for the problems (even though there is very little such schmoes can do about unemployment). Third is to call for "getting the economy moving again," which business executives often use as a rationale for cutting taxes, increasing subsidies, and reducing government regulations pertaining to environment, workers' on-the-job rights, and other inconvenient features of doing business, which, partly rightly, are blamed for hesitancy about creating new jobs. It's an impossible situation when viewed through conventional lenses. So why not use an unconventional one? People in the U.S. work too much, on average, compared with European countries and with less industrialized cultures. So why not share the available work among all those who need it? Cut working hours by ten or twenty percent for those who are fully employed, and hire everyone who really wants to work. Voilà, full employment! More time for the too-busy to spend with family and friends, develop their gardens, read books, watch football. They would have to take pay cuts to make the scheme work, of course, and many would resist. You might be surprised, though, to learn how many workers say in surveys that they would trade lower pay for more time. Of course there are complications, such as the fact that U.S. employers would have to pick up the tab for medical insurance for more employees. (But this is not true in most countries, where businesses do not pay for medical coverage and everyone is covered through government-paid plans.) Training new workers is not free, moreover, and the un- and underemployed surely are not trained for all job niches: neurosurgery is not for amateurs. But do not get trapped in the details of implementation, if you can help it. 
The core task is to say that work for everyone might be made a very high priority, and the idea of sharing available work instead of letting the lucky ones hoard it is an option that may be better than the alternatives. The details of everything get messy, but that applies to the status quo just about as much as it does to proposed innovations; so be wary of those who point out the problems that could accompany change, yet fail to point out problems accompanying present ways of doing things.
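The work-sharing arithmetic is easy to sketch. Assuming, for illustration only, that the total hours of work stay fixed, the required cut in each person's hours follows directly from how many additional people want jobs; the workforce numbers below are hypothetical, not actual statistics:

```python
# Hypothetical work-sharing sketch: hold total work constant and
# spread it over everyone who wants a job. All figures illustrative.

employed = 90e6         # currently employed workers (illustrative)
unemployed = 10e6       # people who want work but have none (illustrative)
hours_per_week = 40     # current average work week

total_hours = employed * hours_per_week      # total work to be done
new_workforce = employed + unemployed        # everyone who wants a job
new_hours = total_hours / new_workforce      # shared evenly

cut = 1 - new_hours / hours_per_week
print(f"New work week: {new_hours:.0f} hours ({cut:.0%} cut)")
```

With ten percent of would-be workers unemployed, a cut of roughly ten percent in everyone else's hours absorbs them all, which is why the chapter's "ten or twenty percent" range is in the right neighborhood.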


A second conundrum about jobs is what to do about the fact that some are a lot more interesting and otherwise attractive than others. The same strategy discussed above might be applied, with a twist. Assume a spectrum of jobs with different degrees of attractiveness, including some that are just boring and some that are absolutely obnoxious. Members of a sane civilization, I hope, would deploy technological innovation or other strategies to reduce the unattractive features wherever feasible. But assuming that some or many unattractive work roles remain indispensable, what is to be done? Suppose that everyone had to spend a few years in one of the less desirable jobs? The same total hours of undesirable work would get performed, but it would be shared widely instead of concentrated on a minority of unfortunates. Suppose every 18-year-old owed two years of service, or four years, or whatever; and, in return, each came out of the period with a voucher for college tuition or for a down payment on a home. Instead of having 56-year-old custodians moving heavy refrigerators from appliance stores to homes, what if 20-year-olds were doing it? Meanwhile, the older workers would be doing other, more interesting jobs, perhaps working their way up in ways very similar to the ones now prevailing; or perhaps there are much better ways to arrange work and workplaces. That is a more complex discussion that I leave aside for now; I just want to suggest that there may be so many things wrong or suboptimal with contemporary work that the topic deserves extended inquiry, debate, and experimentation. Again, the main point is simply to ask whether you would prefer a world where each person got to share in some of the good jobs and in some of the bad jobs. 
I believe that would be fairer, and I believe that more of the assertive people who speak persuasively and stand up to authority would be driven by their own unpleasant experiences to devote energy to reducing the grosser deficiencies in the less-desirable jobs. Over time, it would be reasonable to expect jobs overall to become less noxious. As things now stand, you and I have essentially bought our ways out of the worst jobs, or our families have. We were aided by our academic successes, of course, and by our persistence and other character traits. Many of us also were assisted by student loans (in my case nearly half a million dollars in government-paid tuition and fellowship aid, which was subsidized partly by taxes on convenience store clerks, garbage collectors, and others doing jobs
I would hate). I owe them, and under present arrangements there really is no way to make appropriate recompense. At age 18-25 or so, I could have done one of those jobs or rotated among several, freeing at least one person otherwise stuck in a lousy, dead-end position to get training and move into something more interesting and probably more lucrative for the rest of his or her working life. In sum, every example one examines reveals pieces of what could add up to one or more alternative modernities, technological civilizations that arguably would be superior, perhaps vastly superior. This section's purpose will have been fulfilled if you have grasped the simple but earthshaking point: There are big choices to be made, both in one's stance toward how the future should look and in relatively concrete decisions about how to distribute the benefits of technosocial civilization fairly and wisely.

The Cases, and Some of What They Taught

The course began with a military robotics reading to introduce a central idea of social science: There rarely if ever is a technological action that benefits everybody. Military technologies are an especially good example, because they usually are deliberately designed for hurting others. "We" do not benefit from military robotics; some people do. Later cases showed that technological innovation of many kinds benefits some, harms some, and has no effect on others. The Pinto's exploding gas tank offered a related insight: Key decision makers at Ford took an initiative without outside knowledge or approval. Later readings referred to this as being part of the privileged position of business. It is normal, and yet it is highly problematic, because insiders like Lee Iacocca typically will have different motivations from those of outsiders. This raises a difficult question: How can business executives be incentivized and constrained so that they reduce the harms they cause to the lowest feasible level? Later cases such as that of the cane toads illustrated how primary unintended outcomes can ramify into secondary and tertiary consequences and beyond, sometimes in a never-ending, even cascading sequence. For not only did the cane toads enjoy incredible, unwanted success in the eastern
Australian ecosystem, but they became cultural icons in Queensland, were made into objects for tourists to purchase, and ultimately became audiovisual aids for a professor teaching about unintended consequences. Unlike the cane toads, the Pinto case had another, darker side. At some point in the development and production of the vehicle, it became apparent to Ford engineers on the project that there was a problem with the gas tank. Yet they did not rectify it for many years, apparently because higher-level executives were unwilling to spend the small sum required. Thus, the exploding gas tank shifted from an unintended consequence to an intended one, or at least to one that was not unintended, if there can be such an in-between zone. How to protect against malevolent innovation, or at least knowingly dangerous innovation? We did not focus on it at the time, but the Pinto case actually offered a third central insight: Decisions about innovation usually occur within complex organizations (corporations, governments, universities), and hence any effort to shape innovation in wiser, more benevolent directions must somehow reach into the organizations responsible. How to do that became a third main question to be addressed.

Likewise involving inside-the-corporation incentives and behaviors are the cases of undone innovation. The lack of vaccines to protect against many tropical diseases is due partly to the fact that discretion over whether to fund the necessary R&D is left up to pharmaceutical CEOs and their subordinates. This seems entirely natural in a market-oriented economy, but in fact there is nothing natural about it: it is a choice made by people. Governments do mandate certain types of corporate behavior (regarding accounting practices, for example), and, in principle, governments could require businesses to engage in more public-regarding activities (by threatening, at an extreme, to withdraw the corporation's charter to do business). 
It might be unwise to do so, or it might be brilliant; but it is possible. How to induce technoscientists and businesses to provide more of the "right kind" of innovations therefore is another central question facing those who seek a more intelligent civilization.


Each case of undone innovation reveals slightly different facets of the problem. Green chemists have to operate in the face of incredible technological momentum, accumulated over the course of a century of brown chemistry in which avoiding toxicity was not one of the design criteria. University chemistry and chemical engineering departments do not teach much green chemistry. Chemical companies have enormous sunk costs in brown chemical plant and equipment. Purchasers of chemicals know very little about chronic health effects, and have little incentive to worry about runoff into water supplies, distribution via air currents, or other long-term, indirect harms that do not especially hurt the person actually using the chemical product. Journalists are not trained to write about chemistry, and media audiences are not very interested in the subject unless there are masses of stinking dead fish or children with leukemia.

Geothermal energy is mostly undone, too, but not primarily for any of those reasons. There is of course a status quo with some momentum favoring natural gas and fuel oil for heating, and electric air conditioning for cooling. But all the necessary machinery is readily available for a shift to geothermal. What is lacking is that those constructing new homes and office buildings do not choose to install geothermal. No doubt this sometimes is due to ignorance, but even knowledgeable developers might well choose to skip geothermal because the up-front costs are higher. Those dwelling in the buildings would reap considerable financial gains over the life cycle of the geothermal system, and there would be a collective cut in the use of fossil fuels. But it is not the second, third, or fourth purchaser who is making the initial choice; nor is it "humanity." Developers rightly or wrongly believe that initial purchasers (and their banks) prefer to save the $15,000 that geothermal would add to the cost of a new home. 
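The developer's short-term/long-term trade-off in the geothermal example can be made concrete with a simple-payback sketch. The $15,000 premium comes from the text; the annual savings and system life are purely hypothetical assumptions chosen for illustration:

```python
# Simple-payback sketch for the geothermal example.
# The up-front premium is from the text; the annual savings and
# system life are hypothetical assumptions, not measured data.

upfront_premium = 15_000    # extra cost of geothermal vs. conventional HVAC
annual_savings = 1_500      # assumed yearly fuel/electricity savings
system_life = 40            # assumed service life in years

payback_years = upfront_premium / annual_savings
lifetime_gain = annual_savings * system_life - upfront_premium
print(f"Payback: {payback_years:.0f} years; lifetime gain: ${lifetime_gain:,.0f}")
```

Under these assumed numbers the system more than pays for itself over its life, yet the initial purchaser bears the whole premium while later occupants collect most of the savings, which is exactly the short-term/long-term mismatch at issue.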
This case reveals a fourth central design goal for a more intelligent civilization: How to get long-term considerations taken into account when making short-term choices? That same question arises in decision making about epochal technologies such as robotics that could radically change everyday life. Those pursuing the innovations tend to be excited about their work, focus on the possible benefits more than on the possible risks, and have career incentives to continue doing what they are doing. One can understand all that, and even applaud it. I have no quarrel with technoscientists who are enthusiastic about robotics, synthetic biology, human enhancement, or other epochal technologies. The problem arises from the fact that there is inadequate contestation among advocates, opponents, and the great majority in between. Media tend to emphasize the gee-whiz aspects of nanotechnology, for example, together with touting the economic benefits for job creation that could come from California beating out other states, say, or the U.S. beating Japan and the EU. Even when governments do take up the issues (and they almost always do so with too little debate, too late), there rarely is a level playing field. The advocates predominate, in part because technoscientific R&D proceeds outside of public scrutiny and accountability, and in part because the privileged positions of business and science mean that new knowledge gets swept up and turned into profit. A key to doing better is to arrange a fairer debate, and doing that means figuring out how to get the negative case represented better and earlier. At present, there is the equivalent of a courtroom without a defense attorney.

Final Thoughts
I expect most readers agree on the general goal of using technoscientific capacities to promote genuine human progress, but there is bound to be disagreement about specific steps toward that goal. There can be no single, correct definition of what progress should or could mean, and each reader may by this point have a somewhat different way of thinking about the design challenges and ways of addressing them. To reiterate, I have argued that a better future for technological civilization requires the following:

1. Protect against unintended consequences.

2. Protect against malevolence and willful disregard of public safety, including acts that would undermine important intangibles such as community and other valued aspects of existing cultures.

3. Induce businesses to undertake more public-regarding innovations sooner.

4. Induce technoscientists to devote a higher fraction of their efforts to activities likely to benefit a broader swath of humanity.

5. Induce university instructors to teach more of what students need to know to become helpful, ethically responsible professionals (e.g., green chemists; civil engineers capable of speaking out in public as democratic experts who see part of their duty as explaining the necessity of better maintenance of bridges and other infrastructure; manufacturing engineers who make artifacts more durable and easier to repair and recycle; the list is long).

6. Keep the pace of innovation slow enough to allow government, media, and public to participate meaningfully in shaping the future.

7. Protect against unfairness. Extend the spirit of the Americans with Disabilities Act by reducing other barriers to enjoying the benefits of technology.

8. Put more attention on political and economic innovation, to create a better match between technoscientists' innovative capacities and the social system's steering capacities.

9. Figure out how to help purchasers meet their own needs without disadvantaging others.

Perhaps most crucially, to preserve the great achievements of technoscientific civilization while improving on the shortcomings, I believe there needs to be a concerted challenge to legacy thinking. It simply is not true that things have to be the way they are. Without denying that there are terrible problems in the world, and without claiming that anyone has all the answers, one can nevertheless envision a far more intelligent and fairer technological civilization. I hope you will agree that this book has made a beginning on that effort.