One-Year Anniversary of Trump
One year of poking and prodding over Donald Trump’s election later, we’ve diagnosed “fake news” and “foreign interference” as dual, possibly life-threatening infections to the whole system of liberal democracy—and we’ve seized upon social media as the vector by which these ills are spread. A majority of us are drawn into it for at least an hour every day. A majority of us get our news from it. And so “social media”—a phrase that meant nothing barely a decade ago—today is the phrase without which nothing can be explained.
The unaccountable lawlessness that romps through this new medium threatens to drown out the civil discourse upon which democracy depends. So argue many guardians of our present moral and political order, who look poised to tame the most anarchic content through laws or regulations. Last week, executives from Facebook, Google and Twitter were hauled before the U.S. Congress. The message, to paraphrase Senator Dianne Feinstein, was blunt: ‘The platforms you’ve created are being misused. Do something about it. Or we will.’ The standard Mark Zuckerberg defense—that a social media platform should bear no more responsibility for the diseases that travel across it than, say, the Heathrow airport authority bears when travelers carry this season’s flu into or out of London—now sounds naïve.
From Chaos, A New Age?
The reason is that we now fear a pandemic. Not so long ago, the European Union was inseparable, Trump was unelectable, globalization was irreversible, science was incontrovertible and even the democratization of China was inevitable. In the wake of so many sudden reversals to society’s settled projects and norms, the knee-jerk impulse to legislate and protect against further downsides from social media is predictable—even sensible.
It might also be dead wrong. History suggests the wiser course may be to let social media run amok. This new medium will work giant, unintended consequences upon society—regardless of our feeble attempts to control it. The faster we get to those consequences, the better off we’ll be.
That is the broadest lesson from the social transformations and botched control measures during the Renaissance advent of print—history’s best analogue to the test that social media presents.
Pre-print, the only “one-to-many” communications medium was oral. And so who said a thing, and how big an audience bore witness to the saying of it, mattered most. In this oral culture, thrones and pulpits held sway over public discourse and ideas. Then Gutenberg’s printing press—which emerged in Mainz in the 1450s and by 1500 had spread across the continent—made the pulpit publicly available. Unordained voices were amplified, and what was said began to have a power all its own.
This transition to print culture helped shift Europe out of the medieval and into the modern era—in part because the print medium was messy and subversive. An obscure Polish astronomer, toiling away in Warmia, shifted Europe’s understanding of reality from an earth-centered to a sun-centered universe. Copernicus accomplished that feat not through the singular eloquence or blinding reason of his 1543 book alone, but because decades’ worth of almanacs, bestiaries, travelogues, maps of new worlds and detailed diagrams of human anatomy preceded him. Many were lies and superstition. All undermined the medieval conviction that knowledge about the world lay only in ancient wisdom. Similarly, Martin Luther managed to ignite his Protestant Reformation in 1517, not only because his 95 Theses On Indulgences spread faster than the Catholic Church could tear them up, but also because his theses landed on a print culture that was ready to doubt the Church’s monopoly on truth. (How could the Church claim to be infallible, when every printed Bible since Gutenberg’s had textual differences?)
At its advent, authorities welcomed print as an instrument of humanity’s ascendance. In 1470, the Vatican’s own librarian reflected that ‘One can hardly report inventions of like importance for mankind, whether in ancient or modern times.’ But as the unwanted consequences mounted—the spreading of lies, the fanning of zealotry and bigotry, the deepening of linguistic and national divides—that official welcome was replaced by attempts to halt the corrosion of society’s moral and political order. They all backfired.
In the extreme case, the Islamic Ottoman Empire banned the printing press early and outright, in the 1480s—and missed out on the Scientific Revolution. Within Europe, Catholic countries began to censor Copernicus, Galileo and other scholars whose discoveries undermined Church dogma—and the weight of ‘scientific’ printing shifted toward Protestant countries where publishers faced fewer risks.
And everywhere, the growing insistence of church or state to judge which content was fit for print inspired new philosophies that ultimately reinvented both. By the mid-1600s, resistance to state control by printers and authors had established a new political and artistic principle in society—the “freedom of the press”. The glaring contrast between the absolute truths claimed by kings and popes, and the accumulating evidence for alternatives, created the public space in which philosophers like Thomas Hobbes, John Locke and Immanuel Kant challenged the very nature of monarchy and religion. Those investigations, widely published and debated throughout Europe, established concepts like free markets and constitutional democracy, and shaped new attitudes of skepticism and secularism.
Everything That Breaks…Needed Fixing
The print medium made “one-to-many” communications common. Social media makes “many-to-many” society’s new norm. Given the history of the former, who dares bet against the arrival of unintended, unforeseeable transformations powered by the latter?
These consequences are indeed coming. Let’s get to them as quickly as possible. Rather than repeat the mistake of medieval church and state, by pouring energies into a fraught, futile effort to control this new medium, let’s focus our attention instead upon fixing the flaws in our society as it reveals—or causes—them.
Social media has already produced a couple of big ones.
First, it’s revealed that we’re not honest enough about the gap between our public politics and our private beliefs. Did Russian Facebook posts win Trump his presidency? No. As Tuesday’s off-year elections demonstrated, they don’t determine down-ballot races, either. No matter how expertly Russian bots nudged Americans’ voting behavior, their algorithms produced a tiny fraction of the 10 billion political likes, shares and comments that Facebook logged during the 2016 election cycle. But skim through the messages generated by those algorithms: they are a cold, clinical diagnosis of the views Americans harbor in the privacy of the voting booth. We should all invite these tough insights, courtesy of the Russian taxpayers, because democratic discourse is a polite sham unless it begins with that honesty.
Second, social media is triggering a regress to oral culture. In print culture, the relative scarcity of access to a press gave each published title a certain gravitas. And over 500 years, we refined skills of critical inquiry to separate the great from the garbage. Now everyone has access to a press, all words are published words, and the skills we had refined to sort them are obsolete. We find ourselves swayed mainly by who says the words—again. That is why, in the US for example, names like Oprah Winfrey and Tom Hanks are now freely floated as presidential contenders—not because they have demonstrated interest in public service, but because of our demonstrated interest in them.
Social media may offer a new means for foreign interference or false idolatry to harm our democracy, but in exposing these flaws it also offers an opportunity. To try to censor or shield ourselves from their consequences is dangerous: we risk deluding ourselves into thinking our society is safe again, when instead we’re ducking the real challenges we face. We must depend instead upon our ability to sort through them.
The Upside of Letting Social Media Run Free
In doing so, we should summon the courage not just to tweak liberal democracy, but to reinvent it.
At the advent of print, a moral and political order that felt finished was, in hindsight, medieval. It gave way to modernity.
It’s a safe bet that 2017 isn’t the endpoint of history either, and that society will be ordered quite differently 500 years from now. With our physical technologies, we are already embarking on voyages of discovery to disrupt the present way of things: in artificial intelligence, autonomous robotics, genetic modification, quantum computing and additive manufacturing. But our social technologies—our legal, tax and education systems, asset and labor markets, public values and private ethics—are evolving slowly in response. Too slowly, given that society’s prevailing trend is to grow apart, not together.
If the history of the adoption of print is any guide, our new many-to-many medium will be the catalyst we need to accelerate social evolution. The printing press was a force for secularism and skepticism even as it enabled a century of religious war. Social media is a force for inclusion even as it fosters divisions. To take the most recent example: since mid-October, when allegations of sexual harassment against Hollywood mogul Harvey Weinstein were published, the #metoo hashtag has been published over 80 million times, victims of harassment have found a supportive social space in which grievances are taken seriously, and workplace norms are suddenly, markedly shifting. In China—a place where social media has zero power to seize the agenda—no national conversation on gender equity or sexual harassment is taking place right now. The Communist Party of China, for all its proud rhetoric of meritocracy, unveiled a new leadership team of precisely seven men on October 25. By contrast, in every advanced democracy, an unscheduled reality check is taking place. That bodes well for our future. In an aging world, prosperity depends upon bringing every marginalized talent to bear.
Let social media run amok. Everything it breaks, needed fixing. Everything else needs a renaissance.
Last week, executives from Facebook, Google and Twitter appeared together before the U.S. Congress. Senators grilled them about Russia’s ongoing use of their social media platforms to influence U.S. public opinion—and, last year, to nudge the outcome of the 2016 presidential election. Prior to the hearing, Facebook admitted that over 140 million Americans were exposed to Russia-sponsored content during the presidential campaign season.
The advent of social media, and the speed with which it has displaced other media as citizens’ go-to source for news and opinion, forces every democratic society around the world to ask itself two tough questions:
- How do we maintain the free and open discourse upon which democracy depends, without it being drowned in unaccountable lawlessness (“fake news”) or divided into tribes that don’t respect each other?
- How do we maintain the sovereignty of our own citizens, when our domestic discourse easily admits foreign interference?
Artificial intelligence, whose capabilities are strengthening daily, makes these questions doubly urgent. With growing success, any well-financed interest group can develop algorithms to figure out which citizens to target with which messages in order to influence their political choice-making.
What can, or should, democracies do to address fake news and foreign interference on social media platforms? Should governments try to regulate away these threats to democratic discourse, by treating these platforms as media companies? (Under a new law in Germany, social media companies can be fined up to €50 million if they fail to take down illegal hate speech from their sites within 24 hours of being notified. The U.S. is contemplating new laws that would require big social media platforms to track and disclose political ad-buyers—just as TV and radio must do already.)
Or should we leave the Facebooks and Twitters of the world free to self-regulate—effectively, accepting their argument that they are not media companies, but rather neutral technology platforms with little responsibility for the content they host? Facebook, in an effort to show governments that self-regulation can work, has begun to pay fact-checkers to weed out some of the fake news from among the most popular stories on its platforms. But in a world of self-regulating platforms, the ultimate responsibility for sorting the wheat from the chaff would remain with citizen-users. (This week, Italy began rolling out a “fake news awareness” class across 8,000 high schools to help equip the country’s next generation of adults with the skills to do just that.)
Fake news and foreign influence put democracy itself at risk. With stakes so high, it’s no wonder that governments want to step in to limit the downside risks of social media.
The power of the unintended
But history suggests this thinking may be backwards. The unintended upside of letting social media run wild may far exceed the unintended downside.
That is the lesson from the advent of print—which, as I’ve argued before, is history’s best analogue to the advent of the internet and social media. The former was history’s first true “one-to-many” communications medium. The latter is history’s first true “many-to-many” medium.
The arrival of a “one-to-many” era of communications was dominated by unintended, unforeseeable consequences. Print made it possible for one man, Martin Luther, to ignite a Protestant Reformation that broke the Catholic Church’s monopoly on spiritual truth in 16th-century Europe. The printing press accomplished that feat, not only by spreading Luther’s famous 95 Theses far and wide, but by spreading different versions of the Bible, which eroded public belief in the existence of a single infallible text.
Print made it possible for another man, Nicolaus Copernicus, to shift Europe’s understanding of reality from an earth-centered to a sun-centered universe. Print accomplished that feat not simply by spreading his book, On the Revolutions of the Heavenly Spheres, far and wide. For decades leading up to Copernicus’ publication, print changed the cadence of knowledge creation and knowledge sharing in Europe, so that people came, more and more, to trust the accumulation of new discoveries over the wisdom handed down from the ancients.
The adoption of this new, print medium was a chaotic experience, full of contradictions. Print did as much to perpetuate blatant errors as it did to spread enlightened truth. It fanned religious extremism and bigotry while fostering a new concern for the poor. It deepened linguistic and national divides, but also created a more cosmopolitan “commonwealth of learning” among artists and scholars. The same technology that helped begin a hundred years of religious war across the continent also helped fuel individuality and usher in the Scientific Revolution.
Many states tried to control print’s impact upon society’s moral and political order. That control came at a price. On the extreme, the Islamic Ottoman Empire banned the printing press outright in 1485, just a few decades after print’s emergence in Europe—and maintained the ban until the 19th century. The Islamic world had been the global seat of scientific progress from 750 to 1100 AD, but lost that seat to Europe’s Scientific Revolution.
Within Europe, censorship regimes tried to suppress “dangerous” aspects of the print medium. Catholic countries heavily censored Copernicus, Galileo and other scientists whose discoveries ran against dogma. In 1543, the Catholic Church decreed that no book could be printed or sold without its permission. In 1563, Charles IX of France decreed that nothing could be printed without the king’s permission, either. Inquisitions, imprisonments and public burnings enforced those decrees through fear.
But ultimately, they were self-defeating. Catholic bans pushed scientific printing toward Protestant countries where the risks of publication were smaller. Resistance to state control, by printers and authors, helped establish the “freedom of the press” as a new political and artistic principle by the mid-1600s. And the contrast between the stubborn efforts of Church and state to monopolize truth and the emerging evidence of alternative truths created a space for philosophers such as Thomas Hobbes, John Locke and Immanuel Kant to question the nature of religion and absolute monarchy. Those investigations, widely published and debated throughout Europe, introduced concepts of free-market capitalism and democracy and shaped new attitudes of skepticism, secularism and individualism. The American and French Revolutions followed not long after.
The history of print offers three principles to inform present debates over the regulation of our new, many-to-many medium. First, attempts to control society’s use of social media may prove self-defeating by inspiring new philosophies that undermine existing institutions. Second, the unintended consequences of social media will likely dominate any results we intend to bring about. The greatest gains will go to those societies that explore the upside of the medium, not those that protect against the downside. And finally, the adoption of this new medium is a long-term enterprise. If unintended consequences are going to dominate our future, it would be a good idea to get to those consequences as quickly as possible—and hasten our adaptation to them.
For this week’s letter, I want to share with you an article that I just co-published with my good friend and intellectual maestro, Massimo Portincaso. Massimo is a partner at my old stomping grounds, The Boston Consulting Group, and these days, he runs BCG’s global marketing and communications. So, he does lots of intellectual outreach to alumni like moi. He also posts his own thought-pieces on blockchain, AI and other stuff.
Our article has a business focus, but I think everyone can draw wisdom from it. You’ll find many of the broad themes I touch on familiar…
In an Age of Discovery, Executives Must Become Map-Makers
The world always makes sense. But it doesn’t always make sense to us. What we see depends on how we look at it. Surprise, a constant theme nowadays in the C-suite, is a sign that whatever perspective we’ve been using to see the world no longer shows us things as they really are.
It is when the world stops making sense to us that we need a new map of the world, a new narrative that better represents reality. But coming up with one, and making it stick, is not easy. Consider this: In the early 1500s, Copernicus taught us that the Earth revolves around the sun — not the other way around. We’ve lived with this insight for 500 years. Why, then, do we still gather at, say, the Valentino Pier in Brooklyn to watch the “sunset”?
The reality — as any picture of the same moment from space would make clear — is “earthspin.” We, not the sun, are traveling across the sky to turn day into night. But that simple, centuries-old truth hasn’t yet penetrated our language. It hasn’t yet penetrated our thinking. Every “sunrise” and “sunset” should be a powerful reminder that our everyday narratives can warp and distort our ability to see things as they really are.
Our “maps” of the world exist mainly in the language, or narratives, we use to frame concepts and issues. Words are just the shared mental maps we use to navigate through the world. Executives steeped in classic business strategy may be skeptical of the power of mental maps, or narratives, to shape our understanding of industries, problems, or priorities. But consider how the multiplication of information has diminished leaders’ capacity to articulate the world to themselves, often forcing them to become consumers of other people’s narratives. For example, we may talk about “disruption” in our own industries because that is the narrative being passed around — but what we mean when we use it remains fuzzy to ourselves and others. So, too, are the actions that follow.
Map-making (or map-remaking) is an essential activity when steering an organization during times of rapid change. In such periods, executives must regularly interrogate and update the narratives by which the organization navigates. If they do not, the maps that once guided the organization instead trap it in outdated worldviews. They conceal and distort, rather than reveal, the paths ahead.
If, however, executives do curate the corporate narrative and update their mental maps, their organizations will be better equipped to evolve along with the fast-changing world around them. Such corporate map-making aligns employees’ judgment and intuitions more closely with external reality in ways that generate better questions and decision making; it helps identify deeply buried mismatches between the organization and its environment; and it can powerfully transform employees’ shared behaviors.
Renaissance Wisdom on Mapping New Worlds
In other periods of rapid change, the ability to create new maps (that is, new narratives) separated those who adapted successfully to — and shaped — events from those who were paralyzed by the pace of change.
Take the Renaissance, an analogous moment of transformation driven by “globalization” (the voyages of discovery) and “digitization” (Gutenberg’s printing press). How people saw the present — their narrative — drove their adaptations and led their transformations. Let’s look at three revised narratives that helped define that time of discovery and change.
From Flat Maps to Globes. The first successful Atlantic empire-builders, Spain and Portugal, switched from modeling the world as flat to modeling it as spherical not because they suddenly discovered that the world was round (Europe had known that since the time of Ancient Greece), but to better visualize crucial business questions. The oceans to Europe’s east and west had both been proven navigable, and in 1494 the Treaty of Tordesillas drew a single vertical line (through what is now Brazil) to divide the lands beyond Europe between the two countries. All that lay to the east of the line was Portugal’s; the lands to the west were Spain’s. But in whose territory did the economically significant Spice Islands (present-day Indonesia, on the other side of the globe) lie? And which way, east or west, was the shortest route to getting there? Visualizing the Earth as a sphere helped clarify — and answer — those strategic questions.
From Sacred to Inspired Art. Medieval art was flat and formulaic. Its main purpose was religious — to tell a sacred story. Plagiarism was common practice; innovation was irreverent. Linear perspective (showing depth on a flat canvas by drawing far-away objects smaller), along with new knowledge of anatomy and natural science, entered European art only when Brunelleschi, Michelangelo, da Vinci, and others validated such techniques within a new narrative: the artist’s job was to capture a fragment of God’s creation as he saw it. These artists became famous for works that presented increasingly lifelike, original, and secular visions of the world.
From Luxury to Mass Market. Johannes Gutenberg, who invented the printing press in the 1450s, ended life bankrupt. Why? Because books were a luxury — useful to few, owned by even fewer — and the economics of Gutenberg’s printing press made sense only in large-volume runs. Gutenberg struggled to find books that demanded mass production. But over time, the new printing technology helped change people’s ideas about books and the purpose they could serve. By the 1520s, when Martin Luther directed all laypeople to read the Bible as a way to care for their own souls, books were becoming the new medium in which ideas reached mass audiences. Indeed, the Bible has since been printed 5 billion to 6 billion times, and counting.
It’s Time to Update Our Narratives
In order to keep pace with a rapidly changing world, Europeans during the Renaissance completely remade many of their mental maps. Today, many of ours need remaking, too. Here are three examples of outdated narratives/maps in wide use today whose revision could accelerate organizations’ ability to adapt and unleash creativity.
From Infrastructure to Interstructure. What is infrastructure? Literally, it is the structure that lies below. The word “infrastructure” in English dates back to the 1880s, to the second industrial revolution (that is, the advent of mass manufacturing). The way the term has long been used envisages an industry that is stable, permanent, and fixed — something that underlies the busy social and economic activity that takes place atop it. That was an accurate narrative, once: the builders, operators, and producers of mass enablers (like electricity grids) were separate from the users.
But that is the opposite of the future being articulated today by executives in electricity, water, transport, and other industries: business models that increasingly operate within and between all manner of transactions. Increasingly, infrastructure is being re-conceived as a platform, which — like platforms in the digital economy — blurs the division between producers and users, and enables uses that may be completely unanticipated by the network builders. If all that elected officials, consumers, or employees know of a given industry is that it involves “infrastructure,” then they lack the awareness to be a good partner in these transformations.
“Inter-structure” more closely captures the models that are emerging in these industries. Smart electrical grids enable businesses and individuals to create, trade, and arbitrage electricity with their own generation and storage assets attached to the network. Owners of rights-of-way, from water utilities to railway companies, may enable flows of autonomous vehicles and drones along private transportation routes that do not conflict with public traffic. Owners of physical facilities of all kinds, from parking lots to warehouses to attics, will enable autonomous material flows by supplying staging sites and recharging sites.
From Mechanical to Biological Thinking. As Danny Hillis brilliantly describes in the Journal of Design and Science, “The Enlightenment is dead, long live the Entanglement.” The Age of Enlightenment was characterized by linearity and predictability. It was a world where causal relationships were apparent, Moore’s law had not yet accelerated the pace of change, and economic and social systems were not yet intricately intertwined. But now, as a result of technological and scientific advances and the rise of globalization, the world consists of many large and small complex adaptive systems, all highly entangled. Whereas we used to be able to use a narrative of linearity and mechanics to explain the world, we now need a narrative inspired by biological and other natural systems. Biological thinking is not linear. Instead, as Martin Reeves has written, it is messy. It focuses on experimentation rather than on managing a process to produce a certain effect.
From Automation to Augmentation. Most corporate and policy research regarding artificial intelligence and the “future of work” is centered on automation — the replacement of human labor and cognition with machines. Multiple studies report some variation of the same narrative: about half of all jobs in advanced economies may be automated away by 2050, if not earlier.
This stark human-versus-machine dichotomy gives rise to a number of blind spots and neglects important dimensions, such as the spread of complex adaptive systems and the network effects caused by their entanglement. Most important, it skips the most promising opportunity space for business and for every sector of society: the human-machine interface.
A narrative of augmentation, instead of automation, invites business leaders, policy makers, researchers, and the labor force to pay much more attention to this middle space. Companies and society need to create a narrative that focuses on the potential of AI to shift the scale at which many tasks are performed, often by several orders of magnitude. A good example is personalization. Brands that leverage AI and proprietary data can move from tens or hundreds of customer segments to hundreds of thousands, and see revenue increase by 6% to 10% — two to three times faster than brands that don’t harness this potential.
Amazon is a good example of AI as a source of augmentation rather than just automation. The company, one of the heaviest users of AI and of robots (in its fulfillment centers, the number of robots grew from 1,400 in 2014 to 45,000 in 2016), more than doubled its workforce in the past three years and expects to hire another 100,000 workers in the coming year (many of them in fulfillment centers).
The point is that we need a narrative that encourages us to generate more with available (human) resources by leveraging AI and technology, not one that looks at a finite game of optimizing away labor costs wherever they exist.
The augmentation narrative is not limited to products and processes; it also affects professions and management. Just as what it means to be a doctor is going to be reshaped by access to millions of records and machine learning, what it means to be a manager and run an organization will change significantly. The current trend to decentralize decisions will be fundamentally redefined and accelerated as decisions are increasingly supported by AI and data, “augmenting” decision makers and allowing for new management tools and new organizational structures.
Cartography as Competitive Imperative
Much has already been written about the overwhelming amount of data and information now available to executives. What is often missing in this discussion is that the main challenge does not lie in having too much information (our brains are always flooded with more information than we can process), but in the information overflow that occurs when we lack an apt framework to make the flood meaningful.
Map-making is an essential, but mostly overlooked, part of adapting to rapid change. As the Brooklyn “sunset” example shows, narrative and language can indeed trap us in outdated views of the world. We must gain awareness of our mental maps, and redraw those that need redrawing, if we want the world to make sense to us again. It’s a corporate leadership imperative, and a societal one.
With 73% of CEOs seeing rapid technological change as one of their key issues (up from 64% last year), it’s also a competitive imperative. Conscious map-making helps us to adapt to change, but it also drives it. Five hundred years after the Renaissance, we remember Columbus, Michelangelo, Brunelleschi, da Vinci, and others because their maps defined the terrain in which their age explored. Today’s voyages of discovery are likewise unveiling a new world to us. New maps, new narratives, will emerge and will define how we understand it. If we are not writing them, someone else is…
Three links to help navigate the now:
- Read about the next flu pandemic in The Daily Mail. Can we see the next shock…before it hits? The Daily Mail interviewed me for this feature piece. A bit of scare-mongering (it is the Daily Mail, after all), but my point is—like with my Brexit and Trump predictions last year—that we need to start taking these risks seriously. They’re more likely than we’ve been lulled into believing.
- Read about Our New Robot Overlords in The New Yorker. This is a long piece about robots taking jobs away. Instead of statistics, it tells the stories. Beautifully done.
- Read “How do we solve a problem like the future?”, an essay by prominent Aussie tech journalist, Sandy Plunkett. Over 25 years in Silicon Valley, Sandy befriended every generation of tech leaders, from Bill Gates to Mark Zuckerberg. Now back in her native Australia, she’s helping Aussies craft their role in the 21st century.
As you know, I like to talk about our collective need to “make new maps” to better navigate this age we live in. Another way to think about this same need is to question the fitness of our “social technologies”.
An economist and historian at Columbia University, Richard Nelson, points out that we humans employ two types of technology: physical technologies (which is all the stuff we normally think of when we hear “technology”—things like steam engines and semiconductors); but also social technologies, which are the ways we organize ourselves to do things—ways like settled agriculture, the rule of law, money…and democracy.
These two classes of technology evolve with each other. This co-evolution is easy to see: just look at how the list of subjects taught in universities changes over time. In an earlier letter about artificial intelligence, I gave the example of mass manufacturing—a new physical technology that challenged society to deal with a new scale and complexity of inputs, outputs and money flows. Our accounting systems couldn’t cope, and so in response we evolved a new social technology—finance—to help society cope with, and more fully leverage, the possibilities of these mass-production systems.
But sometimes this co-evolution falls badly out of sync. Lately, we’ve been innovating our physical technologies very rapidly (or, as I argued in Age of Discovery, at a revolutionary pace reminiscent of the Renaissance). But our social technologies—our legal frameworks, tax systems, education systems, political systems and so on—are adapting at an evolutionary pace. That is a very simple, intuitive way to explain the widening anxiety that has been observed across the world’s advanced countries today.
One branch of social technology where the need for innovation is urgent, but not urgently pursued, is in our politics—and specifically, in our democracy.
Democracy in its modern form has been with us since Magna Carta, an agreement signed in 1215 between King John of England and his feudal barons, which recognized that the powers of the king had limits. (As a pure aside, I’ve seen one of the few surviving copies of Magna Carta several times during my years at Oxford University. It’s gorgeous. Magna Carta is often trotted out when VIPs come to town. When it’s not on display, it’s kept deep underground in a nuclear bomb-proof shelter, along with a Gutenberg Bible and other precious artifacts.)
Magna Carta, the American Declaration of Independence, the French Revolution and other seminal moments set the basic features that define all liberal democracies today. And the most basic feature of this social technology called “democracy” is the idea that (A) we choose our representatives, who are then (B) accountable to us for how well they protect, and advance, our well-being.
(A) still works. (B) makes less sense with every passing day.
The basic problem is that 21st-century humanity is tangled together now—and not, as people say, “connected”. The distinction is crucial to understanding what ails democracy today. In a tangled world, cause and effect are hard to see; goods and bads flow globally; and we can’t disentangle ourselves from risks and shocks that originate elsewhere.
In reality, humanity’s well-being is all tangled. But in our politics, we gather in discrete national groupings to choose people to advance our group’s well-being. The people we nominate (our elected representatives) make promises to our group: to grow the economy, to create jobs, to keep us safe, to clean up the environment. But given our tangled reality, in many situations they lack the power to deliver.
Take, for example, the US president. Donald Trump’s presidency is an example of…well, it’s an example of many things…but it is also an example of the widening gap between the promises our democratically elected leaders make to us and their capacity to fulfil them.
Across the world’s democracies, the US president is arguably the elected representative with the most power to shape the world according to the interests of his national group. But that power is proving illusory. Take just one big example: jobs, specifically jobs in the US coal industry. Donald Trump got elected in part because he promised voters in key states to save the American coal industry. And yet, since he was elected, no new coal plants have been announced or opened in the US. Instead, 10 more coal plants have announced they will shut down. Trump is powerless to deliver on his promise to create new jobs for American coal miners, because the scientific consensus on climate change, advances in clean energy and electric vehicles, and other broad trends that entangle the coal industry overpower any actions one country might take on the issue.
What aspects of public well-being can our elected officials deliver on? The list keeps shrinking. Public safety and security? Such promises are challenged by the online radicalization of our own neighbours, or by anonymous hacks to steal our data and identities.
Public health? It’s only as secure as the weakest link in the world’s quarantine capabilities. Remember SARS? We were all lucky that the easy-to-spread pandemic hit Hong Kong, Beijing, Singapore and Toronto first—all cities with very strong public health authorities. Remember Ebola? The public health authorities in those West African countries where it struck were quickly overwhelmed. The rest of the world was just lucky that Ebola, unlike SARS, can’t spread through the air.
This coming winter’s flu season is likely to be severe across the northern hemisphere—not because public health officials are asleep at the wheel, but because southern countries like Australia, whose winter ended a couple weeks ago, have just endured one of their toughest flu seasons on record. And right now, those strains of flu virus are spreading globally through our airports.
Obviously, many determinants of our well-being remain local. The world isn’t flat; it’s more mountainous than ever. And local government policy (say, over housing prices, public transit and migration) does a lot to determine how high its residents can reach.
But many of the biggest determinants of our well-being are now beyond the power of our own political community to decide: environmental change, pandemics, tax avoidance, industrial growth and collapse, commodity prices, technological change, personal privacy and the integrity of our public discourse.
What, then, is the contract we are making with our political leaders when we elect them? Across the democratic world, trust in political institutions fell sharply after the global financial crisis. Did our politicians betray our trust? Or did we make the mistake ourselves, by believing that financial stability was within their power to maintain?
Calling all innovators
Now is the next critical moment in the long story of the social technology we call “democracy”. To help us face many of the biggest threats to our well-being, we might need to give our political leaders more trust, not less. More authority, not less. But we can’t do that, because it demands a trust we don’t feel.
Can we innovate this social technology, democracy, to suit the new world we find ourselves in? Tune in next week…
Three links to help navigate the now:
- Follow Geoff Mulgan. He’s the Chief Exec of Nesta, one of the world’s biggest policy innovation labs. Also a writer, social entrepreneur and policy geek. Everything he retweets is a “must-read”.
- Watch a brilliant, one-hour debate hosted by The Guardian between Jeffrey Sachs, David Miliband and Ngaire Woods on the question, “Are we facing a crisis of democracy?” (Or, if you only have two minutes, here’s a brief highlight and another.)
- Read Eileen Donahoe’s essay on whether government, or the big tech companies, should be in charge of protecting our public discourse from fake news. It begins, “Democracies face an existential threat: information is being weaponized against them with digital tools…” (Eileen works on global digital policy at Stanford.)
Society does not serve the economy. The economy serves society.
That seems obvious, and yet core to redrawing the maps that guide us toward “prosperity”.
The first Renaissance proved powerfully that society’s progress is a bigger concept than economic progress. Historians look back on the Renaissance century, from 1450 to 1550, as Europe’s break from the medieval ages to the early modern era. In the 1450s, Europe lagged the rest of the world on many measures of civilizational progress (e.g. science, exploration, navigation, iron- and steel-making, weaponry, agriculture, textiles and timekeeping), but by 1550 Europe was leapfrogging every other region, and boasted more organizational and energy resources than any previous civilization on earth.
But look back at the same time period through the lens of economic growth, and, according to economic historians, nothing happened. Western civilization achieved some historic transitions: from local to intercontinental Empire, or from looking for truth in Revelation to finding answers in present-day observation. (That philosophical shift led ultimately to the scientific revolution and the Enlightenment.) But “GDP per capita” barely budged. Economic growth statistics failed to capture these monumental shifts, for the simple reason that the economy is only one dimension of society. Some shifts transcend the economic dimension.
Our notions of “progress” and “prosperity” must do the same.
To that end, Bhutan famously measures national progress according to its Gross National Happiness index. The GNH is a home-grown substitute for GDP that combines economic prosperity, social cohesion and environmental sustainability “in search of a more balanced society”. We’ve all heard about Bhutan’s happiness index in the media at one time or another. It’s worth taking two minutes out of our busy lives to browse the specific dimensions that Bhutan’s government measures in its report of public “happiness”, such as: emotional balance, spirituality, “healthy days” (as opposed to sick days), artisan skills (like painting and weaving), hours of sleep, victimhood (by crime, abuse or other), pollution levels and housing quality. Sounds sensible. Every household is classified on a spectrum from “Unhappy” to “Deeply Happy”.
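To make the idea concrete, here’s a toy sketch of how such a multi-dimensional classification could work. The dimension scores, sufficiency threshold and cut-offs below are illustrative assumptions of mine, not Bhutan’s actual GNH methodology (only the category labels echo the ones Bhutan reports):

```python
# Toy sketch of a GNH-style household classification.
# All scores, thresholds and cut-offs are illustrative assumptions,
# not Bhutan's actual methodology.

def classify_household(scores, sufficiency_threshold=0.66):
    """Score each dimension 0..1; the rating depends on what share
    of dimensions meet a 'sufficiency' threshold."""
    met = sum(1 for s in scores.values() if s >= sufficiency_threshold)
    share = met / len(scores)
    if share >= 0.77:
        return "Deeply Happy"
    elif share >= 0.66:
        return "Extensively Happy"
    elif share >= 0.5:
        return "Narrowly Happy"
    return "Unhappy"

household = {
    "emotional balance": 0.8,
    "spirituality": 0.7,
    "healthy days": 0.9,
    "artisan skills": 0.4,
    "hours of sleep": 0.7,
    "pollution levels": 0.6,
    "housing quality": 0.8,
}
print(classify_household(household))  # Extensively Happy (5 of 7 dimensions sufficient)
```

The design point is that no single dimension (not even income) can carry the whole index—well-being is judged by breadth of sufficiency, not by one number.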
My friends at the Boston Consulting Group, where I once worked, have developed a substitute for GDP called SEDA—the Sustainable Economic Development Assessment—as another way to shift the national goal from “wealth” (i.e., GDP) to “well-being”. It considers a narrower, but perhaps more familiar, set of indicators: economic factors like GDP, unemployment and inequality; public infrastructure, health, education and the like. Like my letter last week, SEDA makes the point that, while tracking economic growth makes sense, focusing on growth alone misses the point. Converting economic growth into general well-being is not automatic, and it occurs very differently from country to country.
In China, the ruling Communist Party, which has made economic growth its main focus for the past 30 years, looks set to reform its Constitution at its once-every-five-years powwow in mid-October to enshrine a wider notion of prosperity. To quote the official translation of the official draft text from the official propaganda bureau: “The people-oriented development thought should be implemented to solve the conspicuous problems faced by the country and…to promote balanced economic, political, cultural, social, and ecological progress.”
The Wake Up Foundation, run by a former chief editor of The Economist, thinks about prosperity less in terms of progress and growth, and more in terms of resilience in a time of flourishing risk. It ranks 35 rich democracies according to how prepared they seem to “deal with the big forces we all know will pummel us over the next decades”. Their ranking considers demography, education, innovation, globalization and institutional strength. They rank Switzerland 1st, my own Canada 12th and the USA 23rd…
Even the pope is taking part in this conversation. In his second encyclical, Laudato Si’ (2015), Pope Francis writes:
A technological and economic development which does not leave in its wake a better world and a higher quality of life cannot be considered progress. Frequently, in fact, people’s quality of life actually diminishes—by the deterioration of the environment, the low quality of food or the depletion of resources—in the midst of economic growth.
From macro to micro
My plan was to switch gears at this point, away from the big-picture macro question of “What does prosperity mean?”, and start exploring the micro question of “What does it look like to pursue something other than growth within the economy?”
I was going to share some of the thinking from the Ellen MacArthur Foundation about shifting from a linear to a circular economy. I was going to share some examples of startup companies that aim for long-term, stable profits rather than short-term, lucrative exits—like Meetup.org or Pando Daily.
But I’ve hit my word limit again, so…tune in next week.
Three more links to help navigate the now:
- Listen to Oxford psychiatrist Iain McGilchrist. In this interview on Australian national radio, he helps us to make better sense of the world by using our left- and right-brains—or, as he calls them, the Master and his Emissary.
- Follow Max Tegmark. He’s a physicist at MIT and tries to make the issues presented by Artificial Intelligence accessible to everyone, so that we can all take part in the conversation about it.
- Watch a clip about Being Abroad in Japan, by my friend and professional YouTuber Chris Broad. We met on a long flight from London to Singapore earlier this year. In one of his latest clips (they’re all short), he lives through the experience of being awoken by a North Korean missile flying overhead. Ah, the times we live in…
This year is the 500th anniversary of Thomas More’s Utopia, a book that, in the midst of the political, social and technological upheavals of the first Renaissance, imagined a radically different society. “Utopia” literally means “no-place”, and over time the word has come to mean a fantasy land where all our problems have been solved.
I have argued elsewhere that the 21st century can be humanity’s best. This can be the century when we solve poverty and disease, and flip the human condition from scarcity to abundance—for all.
But I doubt that we will reach that utopia by following the same economic map that guides us today. In too many contexts, we concentrate faster than we distribute. We deplete faster than we regenerate. Society serves the economy, when surely it should be the other way around.
A new map is needed to navigate this new economic world of global markets, global finance, digital monopolies, automation and aging workforces. And it begins with rethinking our destination, “growth”.
Economic growth matters
Today’s economic map aims for a place marked “growth”.
As destinations go, economic growth matters. Bigly. If we live in the “developing” world—say, in Nambia—our community has a long list of development needs: physical infrastructure, like roads, water systems, power systems, sewage, telecom and ports; social infrastructure, like schools and hospitals; and government infrastructure like courts, police, statistics agencies, public health bureaus and all the bureaucracies that help hold society together. All these things make life better, all these things cost money, and the faster we grow our economy the faster we can afford to build (and maintain) them.
China is the headline example of how sustained economic growth can end poverty—and, in the space of two generations, transform a society whose main economic output was cheap manufactures into one that is poised to lead the way in new technologies like AI and quantum computing.
Economic growth is an important destination for “developed” countries, too. First, because every developed country has a lot of developing left to do (in my home province of Saskatchewan, Canada, one in four children still live in poverty). Second, because our domestic politics gets uglier and uglier the closer we get to the opposite destination (i.e., economic stagnation).
To see why, look first at a fast-growing country—again, take China. China’s economy is growing at a rate of about 6.5% this year. All else being equal, that means the Chinese government’s total tax revenues will grow about 6.5%, too. That means the government can afford to spend on every stakeholder in society this year the same amount that it spent on them last year, and has a big pot of totally new money to dish out however it wants. No wonder people are happy with their government. No hard choices.
But in a slow-growth country (which is to say, all the world’s advanced economies), if government wants to spend more money somewhere, it either needs to: (a) raise taxes or (b) spend less money somewhere else. Either way, the process creates winners and losers. Hard choices. Bitter politics, and bitter partisan divides. (Sound familiar?)
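The budget arithmetic behind that contrast can be sketched in a few lines. The revenue figure and growth rates below are round illustrations, not actual budget data:

```python
# Back-of-envelope fiscal arithmetic: if the economy (and hence tax
# revenue) grows at rate g, government can repeat last year's spending
# on every stakeholder AND has a "new money" pot of roughly g * revenue.
# All figures are illustrative, not actual budgets.

def new_money_pot(last_year_revenue, growth_rate):
    """Extra revenue left over after matching last year's spending."""
    return last_year_revenue * growth_rate

fast = new_money_pot(1000, 0.065)   # fast-growth country, ~6.5% (e.g., China)
slow = new_money_pot(1000, 0.015)   # slow-growth advanced economy, ~1.5%

print(round(fast, 1))  # 65.0 -- a big pot to dish out; no hard choices
print(round(slow, 1))  # 15.0 -- any big new programme means taxes or cuts
```

The same arithmetic run in reverse explains the bitterness: at low growth, almost every new promise has to be funded by taking something away from someone else.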
So economic growth matters. Our problem is: it matters too much to us. The environmental movement has been hammering this point into all of our heads for more than 50 years. Economic growth at the expense of our climate, of fresh water, of fresh air, of nonrenewable resources or of the global commons like oceans and fisheries and forests doesn’t move civilization forward in any long-term sense.
The bad consequences of our growth addiction show up in the short-term, too. We all saw this, spectacularly, in the finance industry during the global financial crisis. An industry pursued growth, but broke the ecosystem upon which growth depends.
We are seeing this addiction again today—this time, in the tech industry. From Facebook to Uber to AirBnb and Amazon, tech businesses are rushing to build “platform monopolies”. They, and the venture capital that funds them, believe that “network effects” within our digital society make these games “winner-take-all”. And so the emphasis across the tech startup sector is almost exclusively on growth. Society can sort out the risks later—whether it’s how to protect domestic political discourse from foreign interference, how to protect workers and consumers in the gig economy, or how to usefully re-employ whole labor markets that have been displaced by AI and automation.
We are naïve if we think that these risks will sort themselves out. They won’t.
There is good evidence, for example, that within developed countries “the economy” is now splitting into two: the one, a “dynamic sector” of high-value industries where AI, robotics and other technologies are creating tremendous gains in wealth and productivity; and the other, a “stagnant sector” of low-value industries with low job security and low pay.
Now, depending on your political stripes, conditions in the stagnant sector are either “deeply worrying” or “the way the world works”.
But we should all be worried by the evidence that these conditions are beginning to drag down the overall economy. (For econ-geeks, here’s the paper. For the rest of us: It’s like we’re all at a house party playing Monopoly. As the game goes on, more and more of us go bankrupt. The crowd sitting bored on the sofa, waiting for the game to end, is getting bigger, while the group still having fun around the table is getting smaller and smaller. The whole party’s getting dull. But then, the rules aren’t designed to keep everyone in play. They’re designed to declare a winner.)
Question our destination
Again and again, we steer for growth with a single-mindedness that would make Ahab proud—and again and again, we risk wrecking our ship en route.
So it’s time to ask ourselves some utopian questions: Is our economy aimed at the right destination? And if that destination’s not “growth”, then where?
Three links to help navigate the now
- Sign up for monthly updates from the Institute for New Economic Thinking. INET was founded by Nobel economists after the financial crisis with a mission to make economics serve society. The best resource on the web for…yep, you guessed it…new economic thinking.
- Read Douglas Rushkoff’s book, Throwing Rocks at the Google Bus. Doug and I shared the stage at a summit on the “circular economy” this summer. Doug’s the one who coined the phrase “social media”, and his latest book shakes popular faith in today’s tech sector.
- Follow Adair Turner. He’s a former financial regulator (in the UK), and the one banker who just makes sense. His last book, Between Debt and the Devil, is wonkish but…just makes sense.
I hope everyone had a great summer (or winter, depending on your hemisphere). It was a busy few months of writing, researching, travel, reflecting and speaking for me.
A common narrative nowadays is that we are flooded—overwhelmed—with data and information. But this narrative is misleading. Our main challenge does not lie in being bombarded with too much information (our brain is always flooded with more sensory data than it can process). Overwhelm occurs only because we lack an apt framework to make the flood meaningful.
That little insight leads me to define what “failure” and “success” look like for me—for my next book, and for these letters. If I only add to the flood, that’s failure. But if I can help make the flood meaningful, then I’m succeeding. (Let me know!)
On that note: Last week I was in Houston, to see the aftermath of Hurricane Harvey first-hand and also to attend a meeting of the North American Energy Standards Board.
While there, I told the board members a short story about Christopher Columbus. It’s an important story, because it offers insight into how we can make the present flood meaningful.
It’s also a story about how we should learn from the mistakes of history. Because like Christopher Columbus, we too have set out on bold voyages. Like him, we’re making big new discoveries. But like him, we’re trying to jam them into old (mental) maps.
In 1492, Columbus famously sailed west in search of a shorter route to the spice riches of Asia. He found America. But he was convinced it was Asia. (Some argue that he died still believing that he had in fact found Asia). Why? Because Noah had three sons. And after The Flood, they fathered the three races of Man: Africa, Asia and Europe. That was humanity. That was the world, full stop.
His mental map prevented Columbus from making sense of his own discovery. It’s why “America” isn’t named after Columbus, but after Amerigo Vespucci. Amerigo was the Italian explorer who, some 10 years after Columbus’ voyages, popularized the insight that the lands Columbus had found were, in fact, a mundus novus. A new world. By helping Europeans to make that mental shift, Amerigo unleashed Europe’s capacity to navigate that new world—for good and ill. (From a trade perspective, explorers’ disappointments about the absence of marketable spices and silks turned into excitement about valuable new commodities this new world might reveal: tobacco, sugarcane, corn, potato…unexpected crops that profoundly reshaped Old World diets.)
To make better sense of our present-day voyages of discovery, we need to follow Amerigo’s example: don’t just try to ram all the fresh news of our age into our head (a futile and anxious task). Instead, challenge the maps onto which we are trying to ram them, so that our thinking evolves with our reality. It’s the smarter, wiser approach. Instead of feeling overwhelm, we can adapt our awareness, so that it all makes better sense.
So take politics. We cannot plot Donald Trump onto a traditional linear political map of Left versus Right. But add a second axis—say, Open vs Closed—and suddenly we can (hint: he’s Closed-Right).
Or take China. We cannot understand contemporary China through tired, monochrome clichés like “red” and “dragon”. (There is a “red China”, yes, but my own doctoral work shows that there is also a “blue China”.) Once we start to see China’s other color(s), the information and behaviors coming out of China make a lot more sense.
Or take AI. Most public discourse looks at artificial intelligence through the lens of automation—a zero-sum game in which the machines replace us humans. But that’s only one role that AI might play in our future. We could also look at AI through the lens of augmentation—a game in which AI helps us to do more with available (human) resources. Given that most advanced countries are aging rapidly, and that workforces are going to start shrinking, that’s a game we want AI to play.
Or take urban planning. Many cities still map their land into discrete zones: commercial here, residential there. But how can that map make any sense today, when every residence is a potential AirBnb business and every private driveway is a rentable parking space?
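The two-axis idea from the politics example is simple enough to sketch in code. The coordinates below are illustrative guesses of mine, not measured positions:

```python
# A minimal sketch of "adding a second axis" to a political map.
# Coordinates are illustrative placements, not measured data.

def quadrant(left_right, open_closed):
    """left_right: -1 (Left) .. +1 (Right); open_closed: -1 (Closed) .. +1 (Open)."""
    lr = "Right" if left_right >= 0 else "Left"
    oc = "Open" if open_closed >= 0 else "Closed"
    return f"{oc}-{lr}"

# On a one-axis Left/Right map, these two both read simply as "Right";
# the second axis is what separates them.
print(quadrant(0.6, -0.7))  # Closed-Right
print(quadrant(0.6, 0.7))   # Open-Right
```

The point of the sketch is the map-making move itself: positions that are indistinguishable on the old one-dimensional map become distinguishable the moment we add a dimension.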
Map-making is about:
- Becoming aware of the maps that we’re already using…
- Testing their validity in the face of recent events and discoveries…
- Drawing new maps that better describe the new world we’re in…
It’s one way that we can consciously adapt our thinking in a period of rapid change. And those communities, businesses and individuals who see the new maps soonest will, like Amerigo, be the ones that history remembers best, because they set the frame in which all subsequent events happen…
I’ve been in Australia for the past couple weeks to write, to research—and occasionally, to share with other people what I’m writing and researching about. Highlights included a talk at the Committee for Adelaide on June 26 and a keynote at DATA61+LIVE in Melbourne on June 28—that, and celebrating Canada’s 150th birthday Down Under with friends at an ice hockey match (seriously). I wanted to celebrate by taking part in Canada’s other national sport (lacrosse), but it’s winter down here, so…
I’m pretty optimistic about this ‘second Renaissance’ we’re living through, but I also try to be sober about the case for pessimism. I think, for example, of Montezuma II (1466-1520)—the Aztec emperor who ruled over his civilization’s greatest expansion…and its fall to Hernán Cortés and his Spanish conquistadors. The Franciscan friar, Bernardino de Sahagún, gathered Aztec accounts of their downfall just a few years after the events took place. As he wrote in his chronicle:
The Mexican king Montezuma sent his sorcerers
Who were to cast a spell on the Spanish.
And when they failed, he sent a second group of messengers:
The soothsayers, the magicians and the high priests.
But it was to no avail.
They could not bewitch the people…
Well, of course they couldn’t. Curses aren’t real!
That is exactly how the Spanish reacted, too. Their techno-rational worldview was impervious to magic spells and mythic rituals. But those same spells had always worked before, among us Aztecs. If you were a sorcerer, and you publicly cursed me, that curse became part of our shared reality. You knew you’d cursed me, I knew I’d been cursed, everyone who saw and heard about it believed that I’d been cursed…Your curse was an uncontested fact of our world, as “true” as my public claim that “My name is Chris” is true.
So today, against anti-immigrant commentators in the UK media who point to the welfare burden of migrants, I cast a fact!: Immigrants to the UK pay £15 billion more via taxes than they withdraw via social benefits each year. Or against climate-change denial back home in Canada’s agricultural belt, I cast a fact!: Wheat yields fall 5-10% for every 1-degree rise in global average temperature.
Then my hands fall, helpless. My facts have no power, because they do not cross into this other culture that has invaded. And I am no ordinary fact-caster. I am a high priest of our techno-rational world, anointed at the holy altar of Oxford University itself! (Although I only wear my robes when I’m back in Oxford).
It matters not. My protective magic is failing. And I fear the consequences for all of us. History suggests the stakes are very high…
For months now, I’ve been trading letters with Doug Robertson, author of two impactful books in the philosophy of science: The New Renaissance: Computers and the Next Level of Civilization (1998) and Phase Change: The Computer Revolution in Science and Mathematics (2003).
For Doug, too, the present is a second Renaissance. If we want to appreciate the full significance of the time we’re now living in, we need that historical context. I owe Doug a deep debt for helping me grasp that context, as it pertains to science and innovation.
It is fashionable among economists to argue that innovation is slowing down. (I published an article challenging economists who hold this view a few months ago.) The slow-down argument is only true if one measures innovation as an economist does, using GDP growth statistics. But science itself is flourishing.
As Doug eloquently puts it,
If the print revolution of the Renaissance was the lighting of a single candle in a pitch-black field at midnight, then today’s digital revolution is the sunrise.
Doug’s analogy is not mere poetry; it’s an accurate description of the difference in scale between the impact of print and the impact of digital upon civilization’s information resources.
The big breakthroughs in, say, astronomy over the last few decades would have been impossible in the pre-digital age. But now we’ve determined the age of the universe. We’ve discovered thousands of planets around other stars. We’ve discovered gravitational waves. All pre-digital astronomical findings, from the Mayan calendars to Copernicus’ sun-centric universe, were obvious in comparison. Says Doug: “It is no exaggeration to say that astronomy begins with the invention of the computer.”
The same can be said in almost every branch of science and math. The sun is just now peeking over the eastern horizon. In a very real sense, we are witness to the dawn of civilization.
No wonder we’re unsure about the future.