They say history is written by the victors. Today, meet four men whose big predictions in the 1990s earned them big reputations, but who turned out to be totally off track. Will they admit how wrong they were?
To err is human, the saying goes, so let us meet a few men who have proven themselves all too human. Now that the great certainties of the last decade have vanished into the ether, it’s worth taking a leisurely stroll through the graveyard of dead ideas. The bodies are still fresh, and they have much to say.
At the moment, the four men interviewed here are defined by their wrongness — more charitably, they have become sharply out of phase with general opinion. What does it mean to them to be wrong? How do they spend their days, branded with the scarlet W? Have they made an easy transition from celebrity to disfavour? Do they even know they were wrong?
Such men (and males still hold the lion’s share of wrong ideas) are not hard to locate. In a short period beginning at the end of the 1980s, the world changed so dramatically, so fast, that it seemed we needed a whole new set of ideas. Many were prophetic, and many were wrong. The global economy, the role of nations, the stock market, the way we communicate — all needed new interpretations and explanations, so it was a great time to grab the reins of a speedy stallion and head straight for the nearest cliff.
A wrong idea looks an awful lot like a right one, especially to its author. History, it is said, is written by the winners. But most of us, in our day-to-day lives, are propelled along by popular thinkers and ideas that ultimately prove to be losers in the war of ideas.
This is not a random selection: These men held arguably the four most popular wrong ideas of the last 10 years. To look back earlier would be unfair: Over a lengthy enough span, all ideas are wrong — and may prove to be right again in the future. We could pick on Alvin Toffler for telling us in the 1970s that we’d soon all be wearing disposable clothes, or on Vladimir Lenin for saying revolution would lead to utopia. But that would be too easy.
Nor are we looking at the little technical wrongs, the folks who brought us “cold fusion” in 1989, or the marketers of New Coke, or those who came up with the Reform Party. We’re looking, instead, at the men who stirred millions of people to buy their ideas (and usually the books that contained them) — who got a lot of us to share in their wrongness. Admit it: You probably held at least one of their ideas, or spoke highly of one of their books, for a while at least.
How does one deal with being wrong? One strategy — used by at least two of our interview subjects — is to claim you will soon be right. Ideas take time, you know. This is the strategy of scientist Paul Ehrlich, who has been predicting since 1968 that a “population bomb” will wreak havoc in the developing world. Luckily for the poor, and unluckily for Mr. Ehrlich, population growth has been levelling off instead. He keeps on predicting, though, and someday he may prove right.
Even better, you can claim that your idea would be correct, if only it were implemented fully. Advocates of socialism used to claim that the reason their philosophy led to brutality and suffering was that “actually-existing socialism” was never sufficiently pure. Ironically, the same argument can now be heard from advocates of capitalism. This premise has the virtue of being untestable.
Or you can point out, as most of our subjects do, that not everything in your argument was incorrect. Canada was seduced in the mid-1990s by a profoundly wrong bestseller titled Boom, Bust & Echo, which claimed that the pseudo-science of demographics explains “two-thirds of everything.” If the authors were wrong — as they were about coming popular trends, stock prices and politics alike — it must have been that other third’s fault.
The other option, of course, is to admit your mistake and move on. But these men, none of them specialists, are drawn from the popular-thought mills of the 1990s, a confluence of publishing, celebrity and publicity that tends to wed people firmly to their ideas. Samuel Johnson once said he knew a man “who had only one idea, and that idea was wrong.” But if he confessed it, who’s to say he’d ever be allowed another idea?
And what is wrong, after all? Even the wrongest of these ideas is built on a very solid foundation of correct observations by an intelligent person working in good faith. That three of our four subjects firmly believe they are still right is not surprising, since the foundation of truth is always shifting. Before the Enlightenment, it was easier to sort right from wrong ideas, since God and Nature set the scales of truth; in recent centuries we have had to go it alone.
These men are wrong, then, because we have collectively declared them wrong — so they can certainly argue their cases. We could very well be wrong ourselves. It seems to be in the nature of modern life that we are all, by turns, very right and deeply wrong, sometimes at the same time. All of us, including our four guests, could have been described by John Dryden 300 years ago:
A man so various, that he seem’d to be
Not one, but all mankind’s epitome:
Stiff in opinions, always in the wrong;
Was everything by starts, and nothing long:
But, in the course of one revolving moon,
Was chemist, fiddler, statesman, and buffoon.
Mr. Dow 36,000: James Glassman
What he was wrong about: That the stock market was as safe as cash
A morning with James Glassman is a plunge into the deepest reservoir of optimism. Mr. Glassman is an easygoing baby boomer with silvering hair and a firm handshake, and he is fully aware that most people in the world now consider his ideas utterly wrong.
As the co-author and public face of the 1999 bestseller Dow 36,000: The New Strategy for Profiting From the Coming Rise in the Stock Market, Mr. Glassman built his name on the book’s bold proposition that the Dow Jones Industrial Average, then approaching its historic peak just short of 12,000 points, would triple within five years.
The idea was part of a theory and worldview that encapsulated popular economic thought during the 1990s.
“Stocks,” wrote Mr. Glassman and his partner, economist Kevin Hassett, “are now, we believe, in the midst of a one-time-only rise to much higher ground — to the neighborhood of 36,000 for the Dow Jones industrial average. After they complete this historic ascent, owning them will still be profitable, but the returns will decline… . In the meantime, however, astounding profits will be made.”
In the history of edible words, these would have to be the linguistic equivalent of the seven-course haute cuisine feast served in the first-class dining salon of the Titanic, with extra gravy and a little bowl of sorbet to wash it all down.
Three years later, with the Dow having plummeted toward 7,000 and the stock boom of the 1990s widely considered a speculative bubble, Mr. Glassman’s book has become something of a punch line. The number in the title was off by a digit, joked the liberal economist Paul Krugman in the New York Times: “Let’s hope it was an extra 3, not an extra zero.”
But Mr. Glassman is unruffled. “I know it’s controversial. It’s always been controversial. But I believe it’s fundamentally correct,” he says as he orders a coffee at a slightly bohemian java joint near his apartment on Manhattan’s Upper East Side, a couple of blocks from Central Park. (He also maintains an address in Washington, where he writes a column for the Washington Post, is a fellow at the right-leaning American Enterprise Institute and runs his own libertarian think tank.)
So he wasn’t wrong? “I don’t think anything in it was wrong,” he says, after only the slightest pause. “But in retrospect” — he chuckles — “our timing was awful, and I wouldn’t have been so specific in saying it was going to hit 36,000 by a certain date.”
But that, he insists, was all they got wrong. Not the Dow 36,000 theory itself, which holds that stocks are fundamentally safer than government bonds in the long run, and that the market had plenty of room to rise because the supposedly bloated price-to-earnings ratios of the 1990s (exceeding 20-to-1) were, in fact, far too low.
This, he says, is because companies will soon be valued based not on their balance sheets (the old way, discarded by almost everyone in the 1990s), but on the eventual return their stocks will pay to investors in dividends over the company’s entire lifespan. Under this theory, Mr. Glassman and Mr. Hassett calculated that the markets wouldn’t be fully valued until the index hit the 36,000 mark.
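The arithmetic behind that calculation is easier to see in miniature. What follows is a minimal sketch of the dividend-discount logic in Python; the dividend level, growth rate, bond yield and risk premiums are illustrative assumptions chosen for this sketch, not figures or code from the book.

```python
# A toy version of the dividend-discount arithmetic behind the book's
# valuation argument. Every number below is an illustrative assumption.

def gordon_fair_value(dividend: float, discount_rate: float, growth: float) -> float:
    """Present value of a perpetually growing dividend stream
    (the Gordon growth model): dividend / (discount_rate - growth)."""
    if discount_rate <= growth:
        raise ValueError("model requires discount_rate > growth")
    return dividend / (discount_rate - growth)

dividend = 150.0     # hypothetical annual dividends paid by the index's companies
growth = 0.05        # assumed long-run dividend growth rate
bond_yield = 0.055   # assumed long-term government bond yield

# Conventional view: stocks are riskier than bonds, so discount their
# dividends at the bond yield plus a meaningful equity risk premium.
conventional = gordon_fair_value(dividend, bond_yield + 0.030, growth)

# Glassman-Hassett view: stocks held for the long run are no riskier
# than bonds, so the premium should shrink toward zero.
slim_premium = gordon_fair_value(dividend, bond_yield + 0.005, growth)

print(f"3.0% risk premium: fair value = {conventional:,.0f}")   # about 4,300
print(f"0.5% risk premium: fair value = {slim_premium:,.0f}")   # 15,000
```

Cutting the assumed risk premium from 3 per cent to half a per cent multiplies the “fair” level roughly 3.5 times, which is the mechanism by which a market trading near 12,000 could be argued to be worth 36,000.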
Of course, it is very likely that the Dow will someday hit 36,000. The index of blue-chip stocks was launched in 1896 at a level below 100, and still hovered below 1,000 in the early 1980s. As Mr. Glassman points out as often as he can these days, stocks have proven a good long-term investment during the past century.
On the other hand, the long term is sometimes very, very long: If you had held on to a typical blue-chip portfolio through the 1929 crash, it would have taken you almost 30 years to get your money back. Mr. Glassman’s revised time frame is not exactly welcome news to the hundreds of thousands of people who bought his book, many of whom were middle-aged and may have based their retirement-saving strategies on its principles.
To be fair, Mr. Glassman sanctified the investment miracles of the 1990s; he didn’t provoke them. The boom ended only months after Dow 36,000 hit the stands. Mr. Glassman says his own portfolio has lost about a third of its value. His own fortune wasn’t made investing, in any case, but by selling his founding stake in the Washington newspaper Roll Call in the 1980s.
The portfolio he recommends in Dow 36,000 (one that, wisely, did not contain a single dot-com company, but did include Cisco Systems shares, which have since all but vapourized) has declined by a similarly hefty margin.
“Everything that’s happening now was anticipated, and taken into account, in the book,” he says. While it’s true that the authors named market downturns and terrorist attacks as potential hazards, many investors will raise an eyebrow at his assertion that the equity runup of the 1990s was not actually a bubble.
“I can’t be sure, but there is a lot of concrete evidence that there was no bubble,” says Mr. Glassman. “To have a true bubble, you have to have a lot of people investing in something that really has no value behind it at all.”
He smiles again, predicts that the golden 36,000 mark could become reality within a decade, and announces that he has a plane to Italy to catch.
He is off to expound some of his wrong ideas at one of the many conferences and speaking engagements that fill his very successful life.
Mr. End of Work: Jeremy Rifkin
What he was wrong about: That jobs were going out of style
Seven years after he declared employment a thing of the past in the pages of The End of Work, Jeremy Rifkin doesn’t hide from his anachronistic book. He surrounds himself with it: The walls of his modest office, in his Washington think tank, are lined with scores of copies, in a dozen languages.
“Outside of North America, this is still considered a very important book,” he says.
A huge non-fiction publishing success in 1995, The End of Work was a months-long bestseller in North America and an even greater hit overseas. It spawned an entire movement in Italy. In France, where Mr. Rifkin is now a household name, it helped inspire legislation creating a mandatory 35-hour work week.
Mr. Rifkin’s ideas also had some real currency in the Clinton White House. But today, the big white building down the street from his office might as well be in Australia.
A lot has changed since the deep recession of the early 1990s, when The End of Work was conceived. Then, unemployment was locked in the double digits, and economists predicted that even “full employment” would mean six per cent joblessness. Technology, many said, would only serve to eliminate jobs and banish us all to “electronic sweatshops.” A technology-led employment boom, with a lot of good jobs, was beyond comprehension.
“In the years ahead,” Mr. Rifkin began his book, “new, more sophisticated software technologies are going to bring civilization ever closer to a near-workerless world.”
That doesn’t sound as crazy now as it did, say, when James Glassman was writing Dow 36,000 in 1999 — a point when unemployment in the U.S. had fallen to just over 4 per cent, a level Mr. Rifkin and most economists would previously have considered impossible. Still, most observers would agree that the extraordinary experience of the 1990s taught us one lesson: Information technology can create more employment than it eliminates, or at least be employment-neutral. Recessions happen because of people, not machines.
Jeremy Rifkin smiles agreeably at this suggestion. And then he banishes it. “I don’t think that anything [I wrote] was wrong. In fact, a lot of what I talked about is just beginning to happen now.
“When this came out, in 1994, we were in the midst of a jobless recovery, and we’re entering a jobless recovery now, one that is even more serious because it is compounded by a crisis in personal credit.”
Unemployment, he argues, is higher than people think: Official measures are conservative, and the U.S. figures appear low partly because two per cent of employable American males are in prison. But his most pertinent point involves the credit crunch: When North America’s unsustainable level of personal debt hits the wall, the economy will grind to a halt and unemployment will become rampant.
Most economists agree. But by focusing on these very human economic factors, he is cutting a detour around his bestseller’s thesis: That technology would make employment obsolete. It was, he wrote, “the dawn of the post-market era.” Today, he is reduced to warning of mere market-driven threats.
The book, it should be noted, failed to anticipate the World Wide Web. But now that the bubble it brought is over, says Mr. Rifkin, the end of work is back on track. This, in his view, is not necessarily a bad thing.
Mr. Rifkin is by reputation a professional pessimist; he has written books on the ominous threats caused by air pollution, eating meat and cloning, among other dangers. But most people forget that The End of Work was a wildly optimistic book. He painted mass unemployment as an exciting opportunity for governments to create a volunteer-based “third sector” (an early-1990s buzzword seldom heard today) and to give power to the “social economy.”
“The end of work could also signal the beginning of a great social transformation, a rebirth of the human spirit,” his book concluded. “The future lies in our hands.”
That the future looks nothing like the one he promised has not dented Mr. Rifkin’s optimism. His forthcoming book predicts that the “hydrogen economy” will do for energy what the Internet did for information — and he is eager to fit his old book into this new way of thinking.
“Look, each country read into it what they wanted,” he says. “In France and Europe, people read the final three chapters and said it was a very optimistic book, a hopeful book, and they used it to build a revolution in leisure time and the third sector… .
“Here in North America, people call it a work of techno-pessimism because they focus on the phrase ‘the end of work’ and the early chapters where I discussed the crisis in structural unemployment.” North Americans, he believes, will soon come around.
If Mr. Glassman argues that his theory is right, but behind schedule, Mr. Rifkin takes it a step further: He was right all along, but his theory wasn’t quite what people thought.
For him, the appeal of forging big, portentous ideas far outweighs the considerable risk of those ideas being wrong.
As he enthusiastically spells out his economic theories, he cannot resist the temptation to risk being wrong again: “I don’t want to get hung up making predictions, but I’m sure that by the end of this decade we’ll start seeing the five-day, 33-hour work week, and by the time the next generation comes into the labour force we’ll see the four-day work week.”
Mark it on your calendar: In 10 years, it will be time to test Jeremy Rifkin again.
Mr. Goodbye, Nation-state: Kenichi Ohmae
What he was wrong about: That nations were obsolete
When the Soviet Union finally collapsed in 1991, it seemed to many that the world had changed forever. The global economy had proved more resilient than even a huge totalitarian state, and amid all the changes that would soon be known as “globalization,” nations began to look like little more than a passing nuisance in the world’s affairs.
The nation-state, the organizing unit of international life since the 1648 Peace of Westphalia, suddenly looked obsolete. Its imminent demise became one of the defining tropes of the decade.
In the 1990s, alongside dozens of offerings like The Retreat of the State and The Rise and Decline of the State, at least three books were published with the title The End of the Nation-State. The boldest and most widely circulated of the three was written by Kenichi Ohmae, an American-trained Japanese nuclear engineer and management guru.
In his 1995 bestseller, Mr. Ohmae wrote that “traditional nation-states have become unnatural, even impossible, business units in a global economy.” Within 30 years, the world’s nations would be replaced by 300 city-states and region-states, all subordinate to the multinational economy. He has reiterated this theme in half a dozen other books, making him king of the “hyperglobalists.” He became one of the world’s most highly booked speakers and a member of dozens of corporate boards.
So here he is in Marina del Rey, Calif., sipping Earl Grey tea with honey at a harbourside restaurant table, having presented his Japanese passport and passed an elaborate security check at Tokyo airport in order to cross the very secure and well-defended borders of Japan and the United States. (He is here to attend one of those corporate board meetings.) Nevertheless, he calmly peers through his elegant wire-rim spectacles and explains that the nation-state remains steady on its course to extinction.
“The control by central governments over the fate of corporations and individuals is becoming less and less,” says Mr. Ohmae. “The nation-state’s ability to control the flow of investment [is] not sustainable — and the traditional 19th-century nation-state will have to disappear.”
You might be forgiven for thinking that reports of the demise of the nation-state are more than a little exaggerated. It’s fair to point out, as Mr. Ohmae does at length, that supra-national bodies like the European Union have become more important than some of their member nations. Or that regional economies such as Northern Italy or some Chinese provinces have eclipsed national ones.
Or that, in perhaps the most accurate prediction of the hyper-globalists, the United States is currently at war in response to an attack by what political scientists call a “non-state actor,” Al Qaeda (though the war is still rooted in national jurisdictions).
That said, the nation-state seems to be doing just fine, thanks. It is still the world’s basic economic and political unit, and it is the corporations that currently appear to be in decline. When southeast-Asian economies collapsed in 1998, or when U.S. business declined more recently, corporations leaned on the state, not vice versa. Even right-wing governments are nowadays engaging in the sort of “statist” interventions — mostly at the behest of global markets themselves — that Mr. Ohmae’s book said would never happen again.
The vast majority of corporations, even the largest, remain thoroughly national in both legal status and culture. The auto industry’s “globalization” of production (though not much of its management) in the 1980s inspired Mr. Ohmae and other authors to make their bold predictions, but very few other industries have followed suit. Even the Internet, supposedly a global force, has spawned corporations that are firmly rooted within individual nations, mostly in the U.S. (as amply demonstrated by the debate Amazon.com has stirred up merely by opening a Canadian branch).
Mr. Ohmae concedes, briefly, that the world has not progressed quite as he anticipated. He leads a movement that has had little success in its efforts to oppose “Japan, Inc.,” the tight bond between the Japanese state and large, often faltering corporations. “I don’t think it’s a surprise to me that governments everywhere are wasting their money and not investing in the future,” he says.
“It doesn’t surprise me, the extent to which this idiosyncrasy and stupidity continues. It’s the magnitude that frustrates me, that irritates me with the political system.”
Soon, though, his optimism returns. Video games, he says, have become a globalized industry. And the chairman of Toyota recently hinted he might like to move his company’s headquarters out of Japan, an unprecedented move for such a major corporation. (And still an unlikely one.)
“The traditional nation-based civilization, which is inherited from grandparents to sons and daughters and grandchildren,” he says, “is being replaced by an even stronger horizontal cultural formation, a borderless world. We watch cable TV and satellite television, and everyone is interested in Tiger Woods and Michael Jordan.”
At last, we are left with this: the global village of sports superstars and video games. It is still a long way from the nation-state sinking into the tar pond of obsolescence, but for Mr. Ohmae, it is enough to keep the dream alive.
Mr. Internet Utopia: John Perry Barlow
What he was wrong about: That the Internet would be a lawless Shangri-La
Unique among our subjects, John Perry Barlow has realized he was wrong. It was a sudden and tragic realization, in the midst of an even greater calamity: As the smoke rose from New York and Washington last September, Mr. Barlow got on his computer and sent an e-mail to his many followers declaring, in characteristically colourful language, that their dream had died.
Mr. Barlow’s dream, one of the key notions of the 1990s, was that cyberspace (as we called the Internet and its attractions back before they became mundane) could really be a space: an autonomous and self-governing community, free from national borders, laws and economic constrictions.
Mr. Barlow was this imaginary community’s Thomas Paine, its Martin Luther — and the strange climate of the decade turned him into a powerful figure who was taken very seriously on Wall Street, in Silicon Valley and in the White House.
“We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth,” he wrote in 1996. “Where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity. Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.”
This was Mr. Barlow’s “Declaration of the Independence of Cyberspace,” often listed as the most reprinted document on the Internet. For a number of years, many serious people built their lives around it, and they paid Mr. Barlow a lot of money to share his ideas.
Today, it is easy to find John Perry Barlow. Even though he has half a dozen North American addresses and even more phone numbers, he does not move around at his old pace. One hot summer morning, he is perched in an armchair in his tiny, messy apartment in New York’s Chinatown, a non-air-conditioned bedsit above a store selling lychee nuts and bok choy, its walls lined with photos of Mr. Barlow with his disciple Al Gore (though he has also collaborated with Republicans) and with his three beautiful daughters.
Was he wrong, he is asked. Was Internet hysteria partly his fault? “At the moment, there’s kind of an anti-hysteria hysteria,” he says. But he will admit his mistake. “I have to confess that my notion that the Internet was going to be socially connected, in the old, hippie sense, has not come to be,” he says, his voice dropping to a growl. “I came to feel that I had been promoting a society that was not turning out to my satisfaction.”
Mr. Barlow is dressed in head-to-toe rumpled black, from his cowboy boots to his neckerchief, and he has the golden, weatherbeaten looks of a man who lived as a Montana rancher until he caught the computer bug in the late 1980s. Oh yes, and he also made his living as a lyricist for the Grateful Dead, which helps explain his attachment to utopian communities (and, not incidentally, his instant rapport with Silicon Valley executives, who tended to have an affinity for Jerry Garcia’s timbres).
In the final years of the Internet boom — after writing dozens of manifestos (many in the pages of Wired, the cyber-bible that counted him a key apostle) and giving thousands of well-paid speeches — he came to realize that the Internet had become about as exciting and utopian as the telephone.
Mr. Barlow was devoting most of his energies to the Electronic Frontier Foundation, which promotes freedom of speech on the Internet and, more broadly, the sort of freedom from laws and borders and social pressures that became a rallying cry for his techno-libertarian set. For those ideals, as for so much else, Sept. 11 was a very bad day.
As Mr. Barlow watched the flames, he knew that the Internet would no longer be a libertarian’s dream. “Control freaks will dine on this day for the rest of our lives,” he wrote his friends and followers. “Within a few hours, we will see beginning the most vigorous efforts to end what remains of freedom in America.”
Today, he sits in front of his Macintosh, lights another Marlboro Light, and wonders if it was all worth it. His efforts to defend the copying of information have been shot down by the media corporations, which define much of the Internet’s culture as “theft” — and he blames that on the very libertarian qualities of the Internet for which he once proselytized.
“I used to think that the Internet was going to be a great organizing tool. And it isn’t. Because it gives everybody the right to dissent, but individually — it doesn’t give any incentive to collective dissent.”
Mr. Barlow’s life is quieter these days. The speakers’ fees, his main source of income after he sold his ranch, dried up almost completely after Sept. 11, largely because of the collapse of the conference business, and they have not really recovered.
He still has social cachet — he answers a phone call from two “very young blondes” he says have “adopted” him and drag him from party to party — but there is a distinct sense that this is a fallow season for big thinkers.
For an exhilarating moment, the Internet seemed like its own world. The end of that moment, Mr. Barlow acknowledges today, was partly his own doing.
“The big mistake that I made and a lot of others in my line made was saying that what was happening in the short term was really happening in the long term, and fuelling the business hysteria. I should have been a lot more careful about my pronouncements.”