Sunday, October 2, 2011

Neal Stephenson on Innovation

The following is the article, quoted in its entirety, in case the origin site takes it down or fails to archive it properly (assuming Blogger doesn't do the same, of course):

My lifespan encompasses the era when the United States of America was capable of launching human beings into space. Some of my earliest memories are of sitting on a braided rug before a hulking black-and-white television, watching the early Gemini missions. This summer, at the age of 51—not even old—I watched on a flatscreen as the last Space Shuttle lifted off the pad. I have followed the dwindling of the space program with sadness, even bitterness. Where’s my donut-shaped space station? Where’s my ticket to Mars? Until recently, though, I have kept my feelings to myself. Space exploration has always had its detractors. To complain about its demise is to expose oneself to attack from those who have no sympathy that an affluent, middle-aged white American has not lived to see his boyhood fantasies fulfilled.

Still, I worry that our inability to match the achievements of the 1960s space program might be symptomatic of a general failure of our society to get big things done. My parents and grandparents witnessed the creation of the airplane, the automobile, nuclear energy, and the computer, to name only a few. Scientists and engineers who came of age during the first half of the 20th century could look forward to building things that would solve age-old problems, transform the landscape, build the economy, and provide jobs for the burgeoning middle class that was the basis for our stable democracy.

The Deepwater Horizon oil spill of 2010 crystallized my feeling that we have lost our ability to get important things done. The OPEC oil shock was in 1973—almost 40 years ago. It was obvious then that it was crazy for the United States to let itself be held economic hostage to the kinds of countries where oil was being produced. It led to Jimmy Carter’s proposal for the development of an enormous synthetic fuels industry on American soil. Whatever one might think of the merits of the Carter presidency or of this particular proposal, it was, at least, a serious effort to come to grips with the problem.

Little has been heard in that vein since. We’ve been talking about wind farms, tidal power, and solar power for decades. Some progress has been made in those areas, but energy is still all about oil. In my city, Seattle, a 35-year-old plan to run a light rail line across Lake Washington is now being blocked by a citizen initiative. Thwarted or endlessly delayed in its efforts to build things, the city plods ahead with a project to paint bicycle lanes on the pavement of thoroughfares.

In early 2011, I participated in a conference called Future Tense, where I lamented the decline of the manned space program, then pivoted to energy, indicating that the real issue isn’t about rockets. It’s our far broader inability as a society to execute on the big stuff. I had, through some kind of blind luck, struck a nerve. The audience at Future Tense was more confident than I that science fiction [SF] had relevance—even utility—in addressing the problem. I heard two theories as to why:

1. The Inspiration Theory. SF inspires people to choose science and engineering as careers. This much is undoubtedly true, and somewhat obvious.

2. The Hieroglyph Theory. Good SF supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place. A good SF universe has a coherence and internal logic that makes sense to scientists and engineers. Examples include Isaac Asimov’s robots, Robert Heinlein’s rocket ships, and William Gibson’s cyberspace. As Jim Karkanias of Microsoft Research puts it, such icons serve as hieroglyphs—simple, recognizable symbols on whose significance everyone agrees.

Researchers and engineers have found themselves concentrating on more and more narrowly focused topics as science and technology have become more complex. A large technology company or lab might employ hundreds or thousands of persons, each of whom can address only a thin slice of the overall problem. Communication among them can become a mare’s nest of email threads and PowerPoints. The fondness that many such people have for SF reflects, in part, the usefulness of an over-arching narrative that supplies them and their colleagues with a shared vision. Coordinating their efforts through a command-and-control management system is a little like trying to run a modern economy out of a Politburo. Letting them work toward an agreed-on goal is something more like a free and largely self-coordinated market of ideas.

SPANNING THE AGES

SF has changed over the span of time I am talking about—from the 1950s (the era of the development of nuclear power, jet airplanes, the space race, and the computer) to now. Speaking broadly, the techno-optimism of the Golden Age of SF has given way to fiction written in a generally darker, more skeptical and ambiguous tone. I myself have tended to write a lot about hackers—trickster archetypes who exploit the arcane capabilities of complex systems devised by faceless others.

Believing we have all the technology we’ll ever need, we seek to draw attention to its destructive side effects. This seems foolish now that we find ourselves saddled with technologies like Japan’s ramshackle 1960s-vintage reactors at Fukushima when we have the possibility of clean nuclear fusion on the horizon. The imperative to develop new technologies and implement them on a heroic scale no longer seems like the childish preoccupation of a few nerds with slide rules. It’s the only way for the human race to escape from its current predicaments. Too bad we’ve forgotten how to do it.

“You’re the ones who’ve been slacking off!” proclaims Michael Crow, president of Arizona State University (and one of the other speakers at Future Tense). He refers, of course, to SF writers. The scientists and engineers, he seems to be saying, are ready and looking for things to do. Time for the SF writers to start pulling their weight and supplying big visions that make sense. Hence the Hieroglyph project, an effort to produce an anthology of new SF that will be in some ways a conscious throwback to the practical techno-optimism of the Golden Age.

SPACEBORNE CIVILIZATIONS

China is frequently cited as a country now executing on Big Stuff, and there’s no doubt they are constructing dams, high-speed rail systems, and rockets at an extraordinary clip. But those are not fundamentally innovative. Their space program, like all other countries’ (including our own), is just parroting work that was done 50 years ago by the Soviets and the Americans. A truly innovative program would involve taking risks (and accepting failures) to pioneer some of the alternative space launch technologies that have been advanced by researchers all over the world during the decades dominated by rockets.

Imagine a factory mass-producing small vehicles, about as big and complicated as refrigerators, which roll off the end of an assembly line, are loaded with space-bound cargo, and topped off with non-polluting liquid hydrogen fuel, then exposed to intense concentrated heat from an array of ground-based lasers or microwave antennas. Heated to temperatures beyond what can be achieved through a chemical reaction, the hydrogen erupts from a nozzle on the base of the device and sends it rocketing into the air. Tracked through its flight by the lasers or microwaves, the vehicle soars into orbit, carrying a larger payload for its size than a chemical rocket could ever manage, but the complexity, expense, and jobs remain grounded. For decades, this has been the vision of such researchers as physicists Jordin Kare and Kevin Parkin. A similar idea, using a pulsed ground-based laser to blast propellant from the backside of a space vehicle, was being talked about by Arthur Kantrowitz, Freeman Dyson, and other eminent physicists in the early 1960s.
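
A rough sketch of why beamed heating pays off (the formula is the standard ideal-nozzle result; the gas figures below are textbook approximations, not numbers from the essay): exhaust velocity, which sets payload fraction, scales as the square root of chamber temperature divided by the molar mass of the exhaust,

\[
  v_e \;\approx\; \sqrt{\frac{2\gamma}{\gamma-1}\,\frac{R\,T_c}{M}}
  \;\propto\; \sqrt{T_c/M}.
\]

A hydrogen/oxygen engine exhausts mostly water vapor (M roughly 10–14 g/mol) at a temperature capped by combustion chemistry near 3,500 K; a beam-heated thruster expels pure hydrogen (M = 2 g/mol) and is not bound by that cap, so both factors push the exhaust velocity, and with it the payload, upward.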

If that sounds too complicated, then consider the 2003 proposal of Geoff Landis and Vincent Denis to construct a 20-kilometer-high tower using simple steel trusses. Conventional rockets launched from its top would be able to carry twice as much payload as comparable ones launched from ground level. There is even abundant research, dating all the way back to Konstantin Tsiolkovsky, the father of astronautics, in the late 19th century, showing that a simple tether—a long rope, tumbling end-over-end while orbiting the earth—could be used to scoop payloads out of the upper atmosphere and haul them up into orbit without the need for engines of any kind. Energy would be pumped into the system using an electrodynamic process with no moving parts.
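
A back-of-envelope sketch of the tower claim (all numbers here are illustrative assumptions of mine, not figures from the Landis/Denis proposal): the rocket equation is exponential in delta-v, so shaving a modest amount off gravity and drag losses leaves a disproportionately larger payload once fixed structural mass is subtracted.

from math import exp

def payload_fraction(dv, ve=3.5, structure=0.05):
    # Tsiolkovsky: final-mass fraction m_f/m_0 = exp(-dv/ve); what remains
    # after subtracting an assumed fixed structural fraction is payload.
    # dv and ve are in km/s; ve and structure are illustrative assumptions.
    return exp(-dv / ve) - structure

# ~9.4 km/s to orbit from sea level, including gravity and drag losses
# (a commonly quoted figure); assume the 20 km tower saves ~0.8 km/s.
sea_level = payload_fraction(9.4)   # ~0.018
tower = payload_fraction(8.6)       # ~0.036
print(f"payload fraction ratio: {tower / sea_level:.2f}x")  # ~2x

With these assumed numbers the payload fraction roughly doubles, consistent with the factor of two the proposal claims; the exact figure depends entirely on the assumed losses and structural fraction.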

All are promising ideas—just the sort that used to get an earlier generation of scientists and engineers fired up about actually building something.

But to grasp just how far our current mindset is from being able to attempt innovation on such a scale, consider the fate of the space shuttle’s external tanks [ETs]. Dwarfing the vehicle itself, the ET was the largest and most prominent feature of the space shuttle as it stood on the pad. It remained attached to the shuttle—or perhaps it makes as much sense to say that the shuttle remained attached to it—long after the two strap-on boosters had fallen away. The ET and the shuttle remained connected all the way out of the atmosphere and into space. Only after the system had attained orbital velocity was the tank jettisoned and allowed to fall into the atmosphere, where it was destroyed on re-entry.

At a modest marginal cost, the ETs could have been kept in orbit indefinitely. The mass of the ET at separation, including residual propellants, was about twice that of the largest possible Shuttle payload. Not destroying them would have roughly tripled the total mass launched into orbit by the Shuttle. ETs could have been connected to build units that would have humbled today’s International Space Station. The residual oxygen and hydrogen sloshing around in them could have been combined to generate electricity and produce tons of water, a commodity that is vastly expensive and desirable in space. But in spite of hard work and passionate advocacy by space experts who wished to see the tanks put to use, NASA—for reasons both technical and political—sent each of them to fiery destruction in the atmosphere. Viewed as a parable, it has much to tell us about the difficulties of innovating in other spheres.
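
The “roughly tripled” figure is just the essay’s own numbers added up: with payload mass m and a tank massing about 2m at separation,

\[
  m_{\text{payload}} + m_{\text{ET}} \;\approx\; m + 2m \;=\; 3m,
\]

three times the mass delivered to orbit per flight if the tank is kept.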

EXECUTING THE BIG STUFF

Innovation can’t happen without accepting the risk that it might fail. The vast and radical innovations of the mid-20th century took place in a world that, in retrospect, looks insanely dangerous and unstable. Possible outcomes that the modern mind identifies as serious risks might not have been taken seriously—supposing they were noticed at all—by people habituated to the Depression, the World Wars, and the Cold War, in times when seat belts, antibiotics, and many vaccines did not exist. Competition between the Western democracies and the communist powers obliged the former to push their scientists and engineers to the limits of what they could imagine and supplied a sort of safety net in the event that their initial efforts did not pay off. A grizzled NASA veteran once told me that the Apollo moon landings were communism’s greatest achievement.

In his recent book Adapt: Why Success Always Starts with Failure, Tim Harford recounts Charles Darwin’s discovery of a vast array of distinct species in the Galapagos Islands—a state of affairs that contrasts with the picture seen on large continents, where evolutionary experiments tend to get pulled back toward a sort of ecological consensus by interbreeding. In assessing an organization’s ability to innovate, Harford stakes out the contrast between “Galapagan isolation” and the “nervous corporate hierarchy.”

Most people who work in corporations or academia have witnessed something like the following: A number of engineers are sitting together in a room, bouncing ideas off each other. Out of the discussion emerges a new concept that seems promising. Then some laptop-wielding person in the corner, having performed a quick Google search, announces that this “new” idea is, in fact, an old one—or at least vaguely similar—and has already been tried. Either it failed, or it succeeded. If it failed, then no manager who wants to keep his or her job will approve spending money trying to revive it. If it succeeded, then it’s patented and entry to the market is presumed to be unattainable, since the first people who thought of it will have “first-mover advantage” and will have created “barriers to entry.” The number of seemingly promising ideas that have been crushed in this way must number in the millions.

What if that person in the corner hadn’t been able to do a Google search? It might have required weeks of library research to uncover evidence that the idea wasn’t entirely new: a long and toilsome slog through many books, tracking down many references, some relevant, some not. When the precedent was finally unearthed, it might not have seemed like such a direct precedent after all. There might be reasons why it would be worth taking a second crack at the idea, perhaps hybridizing it with innovations from other fields. Hence the virtues of Galapagan isolation.

The counterpart to Galapagan isolation is the struggle for survival on a large continent, where firmly established ecosystems tend to blur and swamp new adaptations. Jaron Lanier, a computer scientist, composer, visual artist, and author of the recent book You Are Not a Gadget: A Manifesto, has some insights about the unintended consequences of the Internet—the informational equivalent of a large continent—on our ability to take risks. In the pre-net era, managers were forced to make decisions based on what they knew to be limited information. Today, by contrast, data flows to managers in real time from countless sources that could not even be imagined a couple of generations ago, and powerful computers process, organize, and display the data in ways that are as far beyond the hand-drawn graph-paper plots of my youth as modern video games are beyond tic-tac-toe. In a world where decision-makers are so close to being omniscient, it’s easy to see risk as a quaint artifact of a primitive and dangerous past.

The illusion of eliminating uncertainty from corporate decision-making is not merely a question of management style or personal preference. In the legal environment that has developed around publicly traded corporations, managers are strongly discouraged from shouldering any risks that they know about—or, in the opinion of some future jury, should have known about—even if they have a hunch that the gamble might pay off in the long run. There is no such thing as “long run” in industries driven by the next quarterly report. The possibility of some innovation making money is just that—a mere possibility that will not have time to materialize before the subpoenas from minority shareholder lawsuits begin to roll in.

Today’s belief in ineluctable certainty is the true innovation-killer of our age. In this environment, the best an audacious manager can do is to develop small improvements to existing systems—climbing the hill, as it were, toward a local maximum, trimming fat, eking out the occasional tiny innovation—like city planners painting bicycle lanes on the streets as a gesture toward solving our energy problems. Any strategy that involves crossing a valley—accepting short-term losses to reach a higher hill in the distance—will soon be brought to a halt by the demands of a system that celebrates short-term gains and tolerates stagnation, but condemns anything else as failure. In short, a world where big stuff can never get done.
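
The hill-climbing metaphor maps directly onto a toy optimization (a hypothetical landscape and parameters invented purely to illustrate; the annealing-style search stands in for any strategy willing to book short-term losses): a greedy search that never accepts a loss stalls on the nearest small hill, while one that occasionally tolerates losses can cross the valley to the taller hill.

import math
import random

def f(x):
    # Toy landscape: a small hill near x = 2 and a taller hill near x = 8,
    # separated by a valley (hypothetical, for illustration only).
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-((x - 8) ** 2) / 4)

def greedy(x, step=0.1, iters=2000):
    # Short-term gains only: accept a move just when it improves f immediately.
    for _ in range(iters):
        c = x + random.uniform(-step, step)
        if f(c) > f(x):
            x = c
    return x

def annealed(x, step=1.0, iters=5000, temp=1.0, cooling=0.999):
    # Sometimes accept a worse point (a short-term loss), less often as the
    # "temperature" cools; this is what lets the search cross the valley.
    for _ in range(iters):
        c = x + random.uniform(-step, step)
        delta = f(c) - f(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = c
        temp *= cooling
    return x

random.seed(0)
print(f"greedy:   x = {greedy(2.0):.2f}")    # stalls near the local hill at x ~ 2
print(f"annealed: x = {annealed(2.0):.2f}")  # typically reaches the taller hill near x ~ 8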
