I was in Washington, D.C. this past week—the talk-shop capital of the world. I attended a conference on the future of war and spoke at a conference on the future of energy. In between, I took in Mark Zuckerberg’s hearings on Capitol Hill—and even found time to binge on a season of West Wing. (In the 2000s, it was serious political drama. Now, it’s good comedy. What seemed scandalous for the White House fifteen years ago looks so cute today.)
Sucked Into D.C. Coffee Row
I’ve been trying to get my head out of politics for the last couple of weeks, but in D.C. that’s impossible. The first question everyone asks you is, “So, what do you do?” (Here, networking is a way of life.) Then there’s a mandatory 10-minute conversation about the last Trump-smacker: his latest Tweet, or the latest story to break on Politico or The Hill or NYT or WaPo. (This is a city that votes 90% Democrat.) Then they ask you your name.
Other than Zuck’s Facebook testimony, the biggest story on everyone’s lips in D.C. this past week was A Higher Loyalty, the forthcoming book by former FBI Director James Comey. Technically it’s an autobiography of Comey’s full career in law enforcement, but most people are only interested in the last chapter—his time with, and firing by, Donald Trump.
The title references the now infamous, intimate ‘loyalty dinner’ that Comey attended at the White House mere days after Trump’s inauguration. Trump allegedly asked his FBI Director to pledge his loyalty, and Comey, demurring, pledged ‘honesty’ instead.
A few months later, in May 2017, Trump fired Comey. That action prompted the Justice Department to appoint a Special Counsel to look into Trump’s Russia connections (if any), and here we still are, a year later, gobbling up every scrap of this story as fast as it emerges.
The release of Comey’s book this week marks another feeding frenzy. And while the talking heads on MSNBC and Fox News each push their particular narratives, the bigger question will be ignored completely: Is there something ‘higher’—something to which all members of a society (even the democratically elected leader) owe loyalty? And if so, what is that thing?
This is a really good, really timely question.
The Constitution Isn’t High Enough
The obvious (but, I think, wrong) answer is ‘the constitution’. The U.S. Constitution allows a two-thirds majority of the Senate to remove a president who has committed ‘Treason, Bribery, or other High Crimes and Misdemeanors.’ Democrats in this town dream that one day Special Counsel Robert Mueller’s investigation will find a smoking gun under Trump’s pillow, leaving the Senate—and the American people—no choice but to evict, and convict, The Donald.
More likely, I think, Mueller’s investigation will find ‘evidence of wrong-doing’—something in between ‘good’ and ‘evil’. And everyone will be just as divided as before—or, more likely, present divisions will worsen—because the process and the law leave ample room for judgment and interpretation. Was the investigative process ‘fair’? Can we ‘trust’ the process? And even if we do trust the process, does the ‘wrong-doing’ rise to the level of a ‘High Crime’—high enough to overturn the voters’ choice from 2016?
If there is to be something to which members of a society owe a ‘Higher Loyalty,’ it must be something above a country’s constitution. It must be that high place upon which we stand when the constitution is read.
Sociologists talk about trust. Economists talk about social capital. Biologists talk about the evolutionary advantages of cooperation. Political scientists talk about civil society. Lawyers talk about the distinction between ‘ethical’ and ‘legal’. Comey talks about a ‘higher loyalty’. They’re all investigations of the same idea: a healthy society depends upon more than its rules. It also depends upon a shared sense of why the rules matter.
But in a democracy, what’s higher than the constitution?
This Is Getting Biblical
One answer is: the covenant.
Constitutions define states; covenants define societies. Constitutions define the political, economic and legal system; covenants define the moral context in which these systems operate. Constitutions are contracts; covenants are the relationships of social life—relationships that, like ‘family’, cannot be reduced to legal language and market exchanges.
Among this Readership are heavyweights in political theory, constitutional law and sociology, and I sent out a little survey asking a few of you for good books that dig deeper into this idea of ‘covenant’. The #1 recommendation I got back was The Dignity Of Difference, by Jonathan Sacks. At first I was unsure—Jonathan Sacks is one of the most senior rabbis in the Jewish world, and I didn’t want to confuse his religious idea of covenant with the public idea of covenant. But it turns out that Jonathan spends a lot of time thinking about the latter.
In A Higher Loyalty, Comey talks about a looming constitutional crisis. Jonathan would say: Comey is mistaken. America doesn’t face a constitutional crisis; it faces a covenantal crisis. The latter is different, and deeper. It is a crisis over the question: What are the values that govern our society?
The Best Things In Life Are Never Self-Evident
America’s covenant is not its Constitution (signed in 1787), but its Declaration of Independence (signed in 1776). That earlier document famously begins:
We hold these truths to be self-evident: that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
The irony, of course, is that these truths are anything but self-evident. In most societies throughout most of history, the social order has rested upon the idea that all people are not created equal. What the signatories of that Declaration really meant to say was, “Rather than rest upon the ideas of the past, we are going to build a new society upon the idea that each person (i.e., ‘white man’) is owed an equal measure of human dignity.” It took more than a decade of further debate to encode that social ideal into a state constitution.
The Declaration of Independence was a declaration of the moral objective toward which American society should strive. It’s echoed in the Preamble of the U.S. Constitution:
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution…
These blocks of text thrum with shared moral meaning. They are among the best-known sentences in the English language. What has happened to throw this covenant into crisis?
And again, the obvious answer that many people give (‘Donald Trump’) is wrong.
For A Covenant To Hold, We Must Dignify Difference
I think Jonathan would say that the warring narratives between Fox News and MSNBC, between Trump advocates and Trump haters, are a reflection of what ultimately happens when we erode the boundary between politics and religion.
For a society’s covenant to remain strong and healthy, Jonathan argues, these two spheres of social life each need a separate space to play their respective roles. Religion (and by ‘religion’, Jonathan really means all forms of deeply felt group association) is supposed to be the space in which we build identity, community and solidarity. Politics is supposed to be the space in which we work out the differences that inevitably develop between these groups.
We need both. We are social beings. We are meaning-seekers. Group association is an important part of how we become ourselves—and become good citizens. (‘The universality of moral concern is not something we learn by being universal but by being particular.’ – Jonathan Sacks)
But we also need politics. Precisely because so much of our meaning and identity arises from experiences within our particular group, our own meanings and identities will never be universally shared. Living together requires a layer of cooperation that straddles these differences.
Society starts to get ugly whenever these two spheres of social life (the space where we belong, and the space where we cooperate) collapse into one.
When religion is politicized, God takes over the system. When politics turns into a religion, the system turns into a God.
Either way, Jonathan explains, respect for difference collapses. When religion is politicized, outsiders (non-believers) are denied rights. The chosen people become the master-race. When politics turns into a religion, outsiders (non-conformers) are granted rights if and only if they conform (and thus cease to be outsiders). The truth of a single culture becomes the measure of humanity.
Progressives vs Reversers
These concepts (thank you, Jonathan) offer us a fresh way of thinking about what the heck is going on in U.S. politics at the moment.
Is it possible that Democrats have been guilty of turning politics into a religion that demands conformity? Yesterday a New York Times op-ed talked about how many Democrats have stopped talking about themselves as ‘liberals’ (because it now carries tainted connotations in U.S. discourse), and substituted the word ‘progressive’ instead.
The distinction matters. Says the op-ed writer, Greg Weiner:
‘Progressives’ are inherently hostile to moderation because progress is an unmitigated good. There cannot be too much of it. For ‘progressives’, compromise (which entails accepting less progress) is not merely inadvisable but irrational. The critic of progress is not merely wrong but a fool.
Because progress is an unadulterated good, it supersedes the rights of its opponents.
This is one reason progressives have alienated moderate voters who turned to Donald Trump in 2016. The ideology of progress tends to regard the traditions that have customarily bound communities, and which mattered to Trump voters who were alarmed by the rapid transformation of society, as a fatuous rejection of progress.
Likewise, is it possible that Republicans have been guilty of turning religion (be it guns or Jesus) into a test of citizenship—a test that demands conversion?
I think maybe yes. And if so, that’s the fundamental problem, because both sides are doing something to denigrate difference. Unless we all dignify difference, no social covenant can hold. Says Jonathan:
‘Covenants exist because we are different and seek to preserve difference, even as we come together to bring our several gifts to the common good.
…This is not the cosmopolitanism of those who belong nowhere, but the deep human understanding that passes between people who, knowing how important their attachments are to them, understand how deeply someone else’s different attachments matter to them also.’
Having just spent a whole week in Washington, D.C., I can breezily say that the best way to heal America’s divisions is for everyone to go back to Philadelphia, hold hands together, and rededicate themselves to their shared moral project: to recognize human equality and oppose sameness.
Of course, I doubt either side is ready to lay down arms and make a new covenant just yet. War is clarifying. It divides the world into us and them. Peace is the confusing part. It provokes a crisis of identity. In order to make peace with the other, we must find something in common between us, worthy of mutual respect.
Right now that’s a tall order—not just in U.S. politics, but in domestic politics across the democratic world. (‘Populist’ isn’t a label that’s intended as a sign of respect.) But maybe all of us can start asking some good questions, wherever we are, that (a) make us sound smart, but also (b) start the conversation in our community about the covenant to which we all owe a ‘higher loyalty’:
1. Tribalism (i.e., my particular group’s ideas dominate) won’t work. Universalism (i.e., one human truth overrides my particular group’s ideas) won’t work either. So what can?
2. ‘Those who are confident in their faith are not threatened but enlarged by the different faith of others.’ (Jonathan Sacks) Are we feeling threatened? Other than attacking the other, is there another way to restore our confidence?
3. Is all this week’s coverage of James Comey’s new book helping to bring people closer to his main message (‘There’s something higher that spans our differences and makes us one society’), or drawing us further away?
Admittedly, that last question is purely rhetorical—we all know the answer—but somehow a list feels incomplete unless it has three items. 🙂
Brave voyages, to us all,
I wonder, do you ever share my feeling that ‘fake news’ and ‘post-truth’—these phrases that get thrown about every day by the commentariat—cloud our understanding rather than clarify it? To me, such phrases—frequently used, fuzzily defined—are like unprocessed items in the inbox of my brain. I pick them up. I put them down. I move them from one side of my desk to the other, without ever really opening them up to make sense of what they are and what to do with them.
One of my friends, Dr Harun Yilmaz, finally got tired of my conceptual fuzziness, and he and a colleague wrote a brief book to tell me what fake news is, how the post-truth era has come about, and how to win public trust nowadays—for power or profit—now that the old trust engines are broken. It’s now become my little bible on the subject. (And although I’d rather keep the insights all to myself, he’s just published it as an affordable e-book called Marketing In The Post-Truth Era.)
(I’ve never met Harun’s co-author, Nilufar Sharipova, but Harun and I go back many years. We did our PhDs side-by-side at Oxford. While I studied China, he studied the Soviet Union—specifically, how Soviet politicians and historians constructed national stories to give millions of people an imaginary, shared past that helped explain why the USSR belonged together. I’m sure his thesis was very interesting, but I mostly remember how Harun bribed his way past local Kazakh officials with bottles of vodka to peek into their dusty Soviet archives.)
Harun’s been studying ‘fake news’ and ‘post-truth’ since it was still good ol’ ‘propaganda’—and that, from the very best in the business.
The Rise Of Fake News, In Three Key Concepts
1. The Truth Machine
To understand the post-truth era, Harun would say, we first need to understand the prior, truth era. Even back in the truth era, pure truth or real news never existed. Whether we judged a message to be true depended on (a) how the editor of the message presented it and (b) how we, the viewers, perceived it.
Here’s a visual example of (a) from the Iraq War. With the same picture (middle frame), I can present two completely different messages, depending on how I crop it.
Everything you read in a newspaper or hear on a radio, every question asked and answered, is the outcome of a human decision to accord it priority over another item. (Simon Jenkins, Columnist, The Guardian)
What about (b)? How did we perceive ‘the news’ in the era before some people started calling it ‘fake’? The honest answer—for me, at least—was that I mostly took the news to be something ‘real’.
In the post-truth era, when ‘fake news’ has now become a frequent problem, ‘critical thinking’ has become a frequent antidote. Given social media, which allows anyone to say anything to everyone, we need to educate ourselves and our children to think critically about everything we see, read and hear.
Harun would say, we needed—and lacked—this skill back in the age of mass media, too. Yes, the power to speak to large audiences was more concentrated. (You needed a broadcasting license, a TV station, a radio station, a newspaper or a publishing house—and not everyone had those.) But that concentration of power didn’t necessarily make the messages these media machines churned out more trustworthy. That was our perception—and, arguably, our naïvete.
(Here’s a personal anecdote to think about. Back in the mass media age, when I lived in China, a highly educated Chinese friend of mine argued that Chinese audiences were far more media-savvy than Western audiences. They at least knew that everything they saw, read or heard from mass media had a specific editorial bias. In China, you didn’t pick up the daily paper to read ‘the news’; you picked it up to infer the Communist Party’s agenda and priorities.
Now, to be fair to the ‘Westerners’ among us, we were never completely naïve consumers of mass media. I knew that the New York Times had its bias, which was different from the Wall Street Journal. But it’s also true that I never saw, nor enquired into, the editorial process of either. Who decided which stories were newsworthy and which weren’t? Exactly what agendas, and whose agendas, were being served by the overall narrative?)
The point is, even in the truth era, truth was something manufactured. And ‘The Truth Machine’, as Harun and his colleague call it, had three parts: experts, numbers and mass media. When orchestrated together—the experts say, the numbers show, the news reports—these three sources of legitimacy could turn almost any message into ‘truth’.
Throughout the 20th century, governments all over the world used The Truth Machine to dramatic effect: policy priorities were fed in one end, and popular support came out the other. (For CIA history buffs, Harun gives a great example from the 1950s. In 1950, Guatemala overwhelmingly elected a new president who promised to wrest control of the country’s banana-based economy back from the United Fruit Company (an American corporation that owned all the ports and most of the land) and return control to the people. United Fruit and the U.S. government deployed experts, swayed journalists, staged events and made up facts to help the American public reframe the situation in Guatemala as a communist threat to American values and democracy. The new Guatemalan president had no links to the Soviet Union, yet when the CIA helped to remove him via military coup in 1954, public opinion in the U.S. held it up as another victory in the Cold War.)
Businesses, too, have been using The Truth Machine for decades to wrap commercial messages in the legitimacy of ‘truth’. A serious-looking dentist in a white uniform (expert) advises us to use Colgate toothpaste. Mr Clean bathroom cleaner kills 99.9% of bacteria (numbers). And we’re shown these advertisements over and over again (mass media).
2. The Funhouse
The more accurate way to think about our ‘post-truth’ problem today, Harun argues, is not that ‘real news’ has suddenly become drowned out by ‘fake news’. Rather: whereas once there was only one, or very few, Truth Machines operating in society, now there are many. And they’re working against each other, spewing out contradictory truths. The Truth Machines themselves have become a contradiction, since the more competing truths they manufacture, the more they undermine public trust in the authority of numbers, experts and mass media.
We cannot simply trust convincing-looking numbers anymore, because we are now bombarded with numbers that look convincing. We cannot simply trust experts anymore, because we are now bombarded by experts telling us contradictory things. We cannot trust mass media anymore, because mass media is just full of experts and numbers—which we know we can’t simply trust anymore.
The Truth Machine is broken, and so it’s like we’ve gone to the amusement park and stepped inside ‘The Funhouse’—another great metaphor, courtesy of Harun and Nilufar. In the truth era, we assumed that individuals would read and listen to different messages and make a rational choice between them. Now, multiple, contradictory truths create so much confusion that individuals start to doubt everything, like in a hall of mirrors. People think, ‘There is no way for me to know what is objectively true anymore.’
3. The Group
What we all need is a new source of sincerity. And we’re finding it: within our social reference group. It’s our first and last refuge of belief and principle about what is true and what is untrue. Rational analysis has become unreliable, so we are reverting to our oldest strategy for making sense of our world.
Groups as trust machines
The simple fact is that we are social animals. And so ‘groups’ are a real, natural, organic part of our lives. Social science is full of simple experiments, going back to its beginnings, that demonstrate how our group influences how we as individuals think and behave.
(One of the oldest and simplest experiments was conducted in the 1930s by one of the founding fathers of social psychology, Muzafer Sherif. He put participants alone in a completely dark room, except for a single penlight at the other end. He asked each person to estimate how much the point of light moved. (In fact, the light didn’t move at all; our eye muscles fatigue and twitch whenever we stare at something long enough, and those twitches cause us to see movement where there isn’t any.) Individual guesses varied widely, but once the participants got together, those whose guesses were at the high end of the range reduced theirs, and those whose guesses were at the low end raised theirs. Take-away: The group norm becomes the frame of reference for our individual perceptions—especially in ambiguous situations.)
The same technological forces behind the breakdown of The Truth Machine are also behind the rising power of groups. Organic social groups can form more easily now—around shared passions and experiences—than was previously possible. Small, scattered communities of interest have become global networks of like-mindedness. Coordinating messages and meetups, once expensive and difficult, is now free and frictionless. And social groups can filter more easily now, too, creating echo chambers that reinforce opinions within the group and delete dissonant voices.
Making Group Truths
While some of us bemoan the ‘polarization’ or ‘Balkanization’ of public opinion, some influencers—politicians, advertisers—are simply shifting strategies to better leverage this re-emerging power of group trust. More and more influencers are figuring out that, although the old Truth Machine is broken, a new ‘Truth Machine 2.0’ has been born. In this post-truth era, a manufactured message can still become trustworthy—if it reaches an individual via a group.
In fact, this new Truth Machine generates more powerful truths than the old Truth Machine ever could. There was always something artificial about the truths that the old machine manufactured; they came at us via those doctors in lab coats and news anchors pretending to scribble notes behind their news desks. But these new truths come at us organically—with fewer traces of the industrial process that spawned them.
Harun points to the ‘Pizzagate’ episode during the 2016 presidential election—maybe the wildest example of the power of this new-and-improved truth machine. Stories had circulated on social media that Hillary Clinton and other leading Democrats were running a child trafficking ring out of a pizzeria in Washington, DC. In December 2016, one proactive citizen, a 28-year-old father of two, burst into the pizzeria with his AR-15 assault rifle to free the children. He fired shots inside as employees fled, then searched for the children. He became confused (and surrendered to DC police) when he didn’t find any.
The mainstream media debunked the child-trafficking story—which, for some, only confirmed its truth. According to public opinion polls at the time, 9% of Americans accepted the story as reliable, trustworthy and accurate. Another 19% found it ‘somewhat plausible’.
Is that a lot? I think it is: with almost no budget, no experts, no analysis, no media agency, an absurd fiction became a dangerous truth for millions of people.
Marketing Group Truths
Harun’s book with Nilufar is aimed at businesses—to help marketers rethink marketing in an age when the public has lost trust in conventional messengers. And this age does demand a fundamental rethink of the marketing function. In the industrial era, business broke consumer society into segments. We were ‘soccer moms’ and ‘weekend warriors’, ‘tech enthusiasts’ and ‘heartland households’. These segments weren’t organic. They weren’t real groups that their members identified with. They were artificial, rational constructs meant to lump together people with shared characteristics who would perceive the same message similarly. And they worked, so long as The Truth Machine worked.
‘Group marketing’ (a deceptively simple term that holds deep insight) accepts that experts, numbers and mass media are losing their authority to sway our choice-making. We just don’t trust these mass-manufactured truths anymore. But we do trust our group(s). And so, more and more of our buying decisions are based on the logic, ‘I’ll buy this because my group buys it.’
Within this growing phenomenon, Harun and Nilufar have clarified an important new rule in how to create successful brands. It used to be that a company had a Product, attached a Story to that product, and this P+S became a Brand that people Consumed. P+S = B, and B → C.
Group marketing demands a new equation. The stronger the corporate Story, the less freedom groups have to tell their own stories with a Product, and the less useful it is to the group as an expressive device. So the goal is to get the Product into the Group’s hands with a minimum of corporate storytelling. Instead, let the Group build the Brand as the sum of its members’ Individual Stories. Harun and Nilufar compiled several successful examples, my favorite of which is how Mountain Dew infiltrated skateboarding groups in Colombia. (Look for this tactic, and you start to see it everywhere…)
Truth As A Disease
To repeat myself: more and more of our buying decisions are based on the logic, ‘I’ll buy this because my group buys it.’
What worked for Pepsi’s Mountain Dew product also worked for Cambridge Analytica’s political messaging. Ideas were manufactured, planted into groups, and accepted by group members as truth because the ideas came to them via the group.
This is where business and politics differ. Businesses can adapt how they persuade consumers to buy things to this new group-centric approach, and the economy will still function fine. It’s less clear that we can say the same about our politics.
Liberal democracy isn’t built to operate on truth. It’s built to operate on doubt. Liberal democracy is an Enlightenment project from the Age of Reason. It assumes that truth cannot be known in advance (a priori, as the philosophers say). Instead, society must grope toward the truth by making guesses—and being sensitive to what the people find out along the way. Democracy is an exploration. It depends upon a shared commitment to discovery.
Now, thanks to all these competing Truth Machines, a pre-Enlightenment culture of truth is returning—and spreading. It is a blight that threatens the whole ecology of our political system. When too many people believe they have found truth, democracy breaks down. Once truth has been found, the common project of discovery is complete. There is no more sense in sharing power with those who don’t realize it. There is no more sense in curiosity, in new evidence.
Curing Ourselves Of Truth
To rescue the possibility of groping toward Paradise democratically, we need to inject our own group discourses with doubt.
I don’t know how we manage that feat. (But I’m open to suggestions!) I only know that it’s the logical answer. If an idea is foreign to the group, the group rejects it. Therefore, only group insiders can introduce the group to doubts about its own shared ‘truths’.
Only Nixon could go to China.
And so (I bet you thought you’d never hear this one), the world needs more Nixons.
Our Shared Awareness Of Atomization
I’m guessing we all know the sensation of being detached, somehow, from the whole: when we catch ourselves in the act of reaching impulsively for our mobile phone and feel an idle guilt about our addiction to consuming content that somehow feels closer to junk food than vegetables; when we give meditation a try, find it helpful for some inexplicable reason…and then struggle to find the time to meditate again; when we get out of the city for a holiday, widen our vistas, and then feel oddly unfocussed for the first few days back at the office.
(I’ve just experienced the latter. This past week, I went cross-country skiing with an old friend in the Austrian Alps. At the top of a long uphill climb, we paused to catch our breath and take in the view. The air was perfectly still. The sky was a cloudless blue. The mountain peaks were a brilliant white. I closed my eyes and felt the sun warming my closed eyelids. I could hear a few birds singing in the surrounding forest; off to my right, I could hear pine needles crackling as they melted free of their snowy cocoons. I heard my breath. I felt it. For no particular reason, I was profoundly happy.
…and since returning to London, it’s taken me a solid two or three days of circling around my laptop to recover the focus I need to write.)
Enterprising minds have spotted our discontent with disintegration and turned reintegration into an industry. Grocery delivery services here in London emphasize, variously, ‘fresh’, ‘simple’, ‘organic’ or ‘mindful’. Meditation apps are booming. Yoga makes you balanced. Electric cars make you clean. To restore lost relationships — with our food, ourselves, our community, our environment, with the truth — has become one of the most compelling stories reshaping consumer behavior.
We shouldn’t be surprised that it has become one of the most compelling stories reshaping politics, business and society, too. Economists, sociologists, scientists, tech titans and politicians today all ply us with the need for, or the promise of, restoration. (Start to listen for it, and you start to hear it everywhere…)
An Autopsy Of Our Mind
A couple of letters ago, I shared a brief scan of how different researchers across the social sciences today explain why society is disintegrating, and what to do about it. Every branch of social science offers part of the diagnosis, and part of the cure.
Their diagnoses all relate to the fragmentation that is happening ‘out there’, in the external world. But, as we’ve all experienced, the fragmentation is also happening ‘in here’. A deeper disintegration is underway, at the level of our consciousness.
This deeper disintegration is hard to research. It doesn’t yield data the same way that, say, economic inequality does. Yes, we can point to plenty of indirect evidence. The extreme cases show up in our public health statistics — rising rates of youth suicide (here in the UK, suicide is the leading cause of death among people aged 20–34), the opioid epidemic and other substance abuse, and soaring numbers of mental health cases, for example. But we cannot cut open our minds to perform an autopsy; we cannot compare the brain of a youth twenty-five years ago with the brain of a youth suicide victim today and observe how that person possessed a greater sense of belonging-to-something than this person did.
Because this internal reality of disintegration is hard to show empirically, it’s hard for us to accept it as ‘real’. (Wherever we live, we’ve all witnessed the slow struggle for society to take mental illness seriously and to overcome the stigma that’s been attached to it.) And yet, it clearly is real. We’ve all felt it. We all know the behaviors, the hungers, that it can drive. We all know the fleeting bliss that a sense of reintegration can generate.
To better understand the disintegration that right now seems to be taking place between our own ears, I’ve been reading a book by Jean Gebser called The Ever-Present Origin. It’s basically a history of consciousness — a history of how different cultures throughout history have had different awarenesses (if that’s a word). It’s a thick book. It’s a dense book. I wouldn’t exactly recommend it, frankly, except that it’s one of the most important books in post-modern philosophy. I hesitate even to write about it, because it will take me several more years to digest. But it is mind-blowing. Like Yuval Harari’s Sapiens, but less accessible and more insightful.
Gebser (1905–1973) was a German philosopher and linguist, and he first published the book in 1949. Passages like these make it obvious that he was heavily motivated by the fresh scars of World War II and by the looming threat of all-out nuclear war:
The present is defined by an increase in technological power, inversely proportional to our sense of responsibility…if we do not overcome this crisis, it will overcome us…Either we will be disintegrated and dispersed, or we must find a new way to come together.
The restructuring of our entire reality has begun; it is up to us whether it happens with our help, or despite our lack of insight. If it occurs with our help, then we shall avoid a universal catastrophe; if it occurs without our aid, then its completion will cost greater pain and torment than we suffered during two world wars.
But, as anyone who invests years to write a book must, Gebser did possess some hope that a brighter future lay ahead:
Epochs of great confusion and general uncertainty…contain the slumbering, not-yet-manifest seeds of clarity and certainty.
Make Me Whole Again
Gebser’s hunch was that we can’t solve the disintegration that’s underway ‘out there’ without also solving the disintegration that’s underway ‘in here’. We won’t solve the external crises of fake news, or inequality, or political extremism, or ecological crises, without also solving our internal crises of anxiety, emptiness, self-absorption and confusion.
That’s because, for Gebser, today’s external and internal crises are two sides of the same mistake, namely that ‘we have conceded the status of “reality” to only an extremely limited world, one which is barely one-third of what constitutes us and the world as a whole.’
In other words, the root of our disintegration today is that we’ve denied the reality of everything that could restore our sense of belonging, of integration, of harmony, with our selves, each other and the world.
A Brief History Of Consciousness
The one-third of reality which we do accept as real is the mental. This is the reality of measurable space-time; of measurable cause-and-effect; of time broken into past, present and future; of calendars, goals and project plans; of Cogito ergo sum, I think therefore I am.
And the two-thirds that have gone missing? They are older, earlier aspects of our consciousness that we dismissed in order to give primacy to our modern, mental awareness.
The first, Gebser calls ‘the magical’. The magical is the spaceless, timeless oneness that I sensed last week in the Austrian Alps, or whenever we still gaze up at a starry night, or pray in a crisis moment, or whenever we lose ourselves in the beat of the music that’s playing. Nowadays, we are deeply suspicious of anything labelled ‘magic’. But there was a time in human pre-history when everything in our awareness was magical. We had no notion of using measurable space and time to separate cause and effect, and so everything that happened seemed connected to everything else. Rain dances made rain; curses punished wrong-doers; an arrow drawn on a cave painting ‘killed’ the buffalo before the hunt even began. In the magical phase of human consciousness, reality was one big unified thing within which we must listen in order to survive. (I hear, therefore I am.)
The second aspect of reality that we’re missing today, Gebser calls ‘the mythical’. Mythical consciousness first began when we discovered that the oneness of nature was, in a lot of ways, more like a circle. Natural events recur, rhythmically. Once this awareness of ‘recurrence’ became part of our reality, reality became, not just a oneness, but a polarity: day and night, summer and winter, birth and death, yin and yang. We became aware that the polarity of nature extended into us: the body and the soul. We began to weave events, objects and people together into stories that gave reality greater coherence — that made all the recurrences and balances fit together. We imagined ourselves as heroes in these stories; we imagined life as a hero’s journey; we shared collective dreams as a community. In the mythical phase of human consciousness, the world became a story in which we must speak in order to survive. (I speak, therefore I am.)
For Gebser, our third aspect of consciousness, the mental, emerged when we began to go off-script (around 2,500 years ago). Instead of finding our roles within the stories inspired by nature’s patterns, we began to ad lib our own intentions and journeys, by drawing instead upon something inside ourselves. Our mythical awareness of nature’s polarity was replaced by our mental awareness of a duality: us, outside of nature.
Once we stepped outside of nature, we could begin to direct our own lives. Time, which in the magical world had been one single big moment and in the mythical world had been a circle we traced over and over again, became the line (past, present, future) along which we played out our individual intentions. Time was now finite for us, and measuring time — conquering time! — began to matter. Space, which in the magical and mythical worlds had been irrelevant to the fulfillment of our lives, now imposed itself as a limit on how far we could go. Space became finite for us, and measuring space — conquering space! — began to matter.
If you grasp that last paragraph, then you’ve grasped the past 2,500 years of how our sense of ‘reality’ has been changing. In short: we’ve been getting better and better at measuring space and time, which (a) gives us more and more power to exert our own intentions over nature but also (b) draws us further and further away from the oneness of space and time that we used to know intuitively.
(To drive this point home, Gebser offers two seminal examples: the discovery of linear perspective during the Renaissance, and the discovery of space-time in the late 19th and early 20th centuries — which is what got me re-reading Stephen Hawking. They’re fascinating examples, and I’ll digress into them at the bottom of the page, if you’re interested.)
Finding The Real In A Post-Truth World
Fast-forward to today, and Gebser’s history of human consciousness gives us a fresh lens for understanding the biggest changes underway today.
Take the mega-problem of post-truth politics. Why do once-powerful arguments based on facts and evidence suddenly seem powerless? For Gebser, this is a familiar pattern of exhaustion. As myth replaced magic, the power of magic spells weakened into mere bewitchment, and finally into empty rituals and superstition. As mind replaced myth, the epic explanations for everything became mere stories and entertainment.
And now ‘facts’ are becoming mere ‘alternatives’.
Our instinctive reaction (mine, anyway) is to leap to the defense of Reason. We must re-educate ourselves on how to think critically, how to recognize bias, how to apply logic and to be ruled by the knowledge that emerges from scientific methods. We must put wishful thinking and tribal tendencies back in their bottles — through heavy regulation, if necessary.
Except we can’t, Gebser would say. That is precisely the conceit that led to the shock of a President Trump and a Brexit vote (or, he argued in his own lifetime, to two World Wars).
Among American voters in 2016, Donald Trump won hearts, not minds. He didn’t give any reasoned arguments. He spoke instead in mythic terms about an imaginary America under siege. He held up tribal totems — the flag, guns, male aggression. In a recent New York Times piece, the columnist David Brooks bemoaned this neo-tribalism. Gebser would say: it has always been part of us.
The magical and the mythical are real, Gebser would explain, and that is the lesson that we need to take away from the shock events of recent years. Not real in the same way that we measure space and time, but real in our consciousness nonetheless. Modern, mental humanity gets very uncomfortable at the insinuation that reality has magical and mythical aspects. We deny the possibility. But, Gebser argues, that only makes us fools. ‘Those who are unaware of these aspects, fall victim to them.’
The resurgent power of magic and myth in society is a sign that our Age of Reason — the age of mind over everything — is reaching exhaustion. The project was flawed from the beginning, Gebser would say, because we can no more purge the magic and mythical from our reality than we can purge them from our language. Every time that we feel ‘disconnected’, or ‘unbalanced’, or feel anxious that we’ve ‘run out of time’, we betray our yearning to get back to the original oneness of space and time that’s now been completely carved up by rational thought.
In this moment of mental crisis, Gebser predicted, ‘soon we will witness the rise of some potentate or dictator who will pass himself off as a “savior” or prophet and allow himself to be worshipped as such.’ (I’d say we’ve reached that point.)
But that prophet is false. He is, in Gebser’s words, ‘less than an adversary: he is the ruinous expression of man’s ultimate alienation from himself and the world.’ (Sounds about right.) He demonstrates that the latent, neglected power of magic and myth can still move us powerfully, but he does so by lashing out at our mental reality. In the end we’re left more fragmented.
The healthy response in this post-truth age can’t be to deny what reason has revealed to us. And it isn’t to purge magic and myth, either. (We can’t, and more to the point we shouldn’t, since doing so would also purge all emotion and inspiration.) Instead, Gebser thought, we need to ‘renounce the exclusive claim of the mental structure’ over what’s real, and reintegrate the magic and the mythic into our consciousness.
‘Like all ages, our generation, too, has its task.’ It is to learn to see ourselves ‘as the interplay of magic unity and mythical polarity and mental conceptuality and purposefulness. Only as a whole person is a person in a position to perceive the whole.’
So…Where To Begin?
I’m going to be chewing on all this for a long, long time. But my immediate take-aways are these:
- Trust our magic and mythic impulses more. These impulses are everywhere today in our consumer society, in art, in science. Even corporate executives have started talking about making their companies more ‘soulful’. At the same time, we hesitate to follow them, because we don’t understand the rational basis for these impulses. Well, all the above gives us that rational basis, in a meta-sort-of-way. So we should ‘go with them’, and feel more sure about doing so. So long as we bring our mental awareness along with us, we won’t slip into New Age pseudo-spiritualism. We’ll end up somewhere more real.
- It’s time to get past gawking at the inconsistencies and ignorance of the Donald Trumps of the world. Their ignorance is irrelevant to their power, and it’s precisely that power that we need to understand and integrate — in a healthier way — into our politics.
- In history, periods of general confusion and anxiety ultimately arrived at new clarity and certainty. The more ‘awake’ we can be to the conflicts inside us, the sooner we’ll all get there. I like this quote a lot: ‘Our sole concern must be with making manifest the future which is immanent in ourselves.’ That’s deep.
How We’ve Conquered Space And Time
Gebser offers two seminal examples. The first was the discovery of linear perspective during the Renaissance — pioneered by the Italian artist Filippo Brunelleschi, and perfected by Leonardo da Vinci. Linear perspective creates the illusion of depth — of a third dimension — on a two-dimensional surface.
How could a new style of drawing be of historical importance? It makes no sense, until you try to imagine what it was like to try to conquer space without it. Space is three-dimensional. If you don’t have any way of communicating ideas in three dimensions, then space is difficult to master. No two-dimensional picture of the human anatomy can prepare a medieval doctor for what he finds when he cuts open a patient; he can only learn from cadavers — and his own experience. No two-dimensional drawing of a long-standing tower can explain to an architect how to build it; she can only mock-up a model, and hope that her real-life version stands the test of time, too. No two-dimensional drawing of a water wheel, or a clock, or even a knot, can reliably show a novice how to make one; he can only apprentice himself to a master and watch how it’s done.
But da Vinci’s drawings — for complex machines, for giant statues, for soaring bridges — can be followed, even centuries later, to bring his ideas into three dimensional reality.
Until we had a technology to reliably represent space, the reality of space was a sort-of prison that trapped our ideas. But with the advent, and perfection, of linear perspective, suddenly space became our prisoner.
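(To make the mechanics concrete — my gloss here, not Gebser’s: linear perspective boils down to a simple rule of proportion. A point in three-dimensional space at horizontal offset x, height y and distance z from the viewer lands on the canvas at x′ = d·x/z and y′ = d·y/z, where d is the distance from the eye to the picture plane. Dividing by z is the whole trick: farther objects shrink, parallel lines converge to a vanishing point, and a flat surface reliably encodes depth.)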
The second example Gebser offers was the discovery of space-time in the late 19th and early 20th century. Basically, we figured out how to think about time as a fourth dimension of space. Mathematicians call these conceptions of four and higher dimensions ‘non-Euclidean geometries’. Just as linear perspective helped us to measure and conquer space, our ability to represent time as ‘just another dimension’ improved our powers to measure and conquer time.
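(To give a flavour of what that means — again my gloss, not Gebser’s: in Minkowski’s space-time, the ‘distance’ between two events mixes space and time into one quantity, s² = x² + y² + z² − (ct)², where c is the speed of light. Time enters the formula on the same footing as the three spatial coordinates, just with the opposite sign, which is precisely what licenses physicists to treat it as ‘just another dimension’.)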
(When Stephen Hawking passed away recently, every newspaper in the world ran an obituary. None helped us to understand the significance of his most famous book, A Brief History Of Time. That book was all about trying to help the rest of us understand how physicists think about time as a fourth dimension — and why being able to do so makes a whole new era of scientific progress possible: from nuclear power to mobile phones to quantum computers.)
Donald Trump can become President of the United States. Boris Johnson can become Foreign Secretary of Britain. Silvio Berlusconi, whose bongo bongo parties once secured his status as the most debauched leader in the democratic world, is back on top of Italian politics. Far Right extremists can win seats in the German Bundestag.
This is the new political world we are in. How did we get here?
Theories abound. If 2016 was the year of shock, then 2017 was our year to gawk. By 2018, a whole industry of handwringing had sprung up to explain to us how our expectations came to be so blind to reality. And that industry is flourishing.
The clearest theories of ‘how we got to now’ are those put forth in the academic literature, where the rules of debate are explicit and where disagreements are dressed in politeness (more like pistols-at-dawn than revenge porn). Fake news exists in academic publications, to be sure, but much fakery is filtered out by very clear rules about what you can and cannot say to support your views. You can’t say ‘In my opinion…’, for example. You can’t say ‘You are entitled to your facts, and I am entitled to mine.’ More precisely, you can say such things, so long as you can accept the laughter, ridicule and—most damning of all—anonymity that will follow. In the academy, unless you can say ‘The evidence suggests…,’ and unless you can cite evidence that you believe supports your suggestion, your views will gain few followers.
(The academy pays a price for this clarity, and that price is truth. Academic literature contains no truth. It contains only theories—theories that happen to fit the available facts. Academics may persuade themselves (they may even persuade other people!) that they are right, but unlike the righteousness that priests, prophets and politicians might enjoy, academic righteousness is always at risk: some future facts might prove their beloved theory wrong.)
How Social Science Thinks
In the physical sciences, facts are arranged in a causal flow. To oversimplify: All biology is, ultimately, chemistry. All chemistry is, ultimately, physics. Physics is the fountainhead. So if physicists discover a new fact about reality, then all the scientists working on problems downstream—the chemists and biologists—might need to re-examine their own theories to make sure they still conform to the upstream story.
But ‘How did we get to the new political world we are in?’ is a question for social science. And in the social sciences, it’s unclear where causation begins. Economists love to measure productivity and count money, and theorize about how the economy can explain everything else. Political scientists love to run regressions on election results, and theorize about how politics can explain everything else. Sociologists love to identify the shared ideas that differentiate groups—groups that, say, began with the exact same resources but somehow ended up in opposite situations. Those shared ideas—you guessed it—can explain everything else.
The messy reality, of course, is that the causal flow of social change is not linear. It is, instead, a braided stream:
Social change happens through multiple channels that divide and recombine. Causal flows converge here and diverge there—sometimes reinforcing each other, sometimes cancelling each other out. (I stole this metaphor from my doctoral supervisor, Vivienne Shue, and her latest book, To Govern China.)
Across the academy of social sciences, each discipline is trying to retrace the winding ways that led us to this new and unfamiliar world.
It’s The Economy, Stupid
Economists retrace our voyage into the economic unknown. Globalization is shifting the balance of economic power from the Atlantic to Eurasia. Automation is worsening the imbalance of economic power within our economies between labour and capital. Economic growth, which has been driving progress across Europe and North America since the Industrial Revolution, is slowing down, and evidence suggests that it might never recover its lost momentum. That evidence includes an aging population, diminishing returns from education and weaker-than-predicted productivity gains from the digital revolution. At the household level, costs of housing and living are soaring, inequality is widening, consumer debt is ballooning, and a looming robo-calypse threatens to eliminate half of all present-day jobs.
These economic facts combine to ask us: ’Is progress still possible?’ For society, it’s a big question. If we lose faith in our collective story of economic progress, do we become less tolerant of one another and more divisive? The theory is that if we start believing that the pie won’t get any bigger than it is today (and might even start shrinking!), then social solidarity comes under strain. We become hostile to the idea of sharing and more focussed on making sure that we eat our fill first. Sounds a lot like our present-day trade and immigration debates.
It’s Politics, Stupid
Political scientists chart our drift into unfamiliar political territory. In the U.S., spending money now counts as constitutionally protected speech. Too much money in politics means that politicians need the support of big finance or big business or billionaire egos to get re-elected—as much as, or more than, the support of voters. That legislative capture, combined with the offshoring and automation of the old industrial economy, is leading to the postindustrial collapse of union power and labour movements. The consolidation of local media into big, sometimes foreign-owned, conglomerates has led to the vanishing of working-class, street-level issues from the public eye. Is it any wonder that trust in public institutions is plummeting across the advanced democracies, or that China’s alternative model shines brighter by the year to emerging economies and the strongmen who lead them?
These political facts combine to ask us: ‘Does democracy still work?’ Is ‘one person, one vote’ a promise or a lie? We see these doubts being voiced, forcefully, across the advanced democracies.
It’s Social Change, Stupid
Sociologists chronicle our recent expeditions beyond the boundaries of all known social experience. ‘Liberal democracy’ is leading us toward tipping points that challenge our commitment to liberalism—perhaps more seriously than at any time since the French Revolution or the U.S. Civil War. In North America, the aging of immigrant populations of European ancestry, alongside continued inflows and higher birthrates among today’s non-European migrants, is slowly shifting demographic facts. Dominant racial, ethnic and religious groups are losing their grip on cultural primacy. On cue, culture wars are breaking out, over gay marriage, feminism, religion, guns. The core tenet of liberalism—that, however compelling our tribal instincts may be, rationally we know that we all share in a universal humanity—now sounds naïve in the same societies that once trumpeted it.
These social trends together raise the big question: ‘Who owns the future?’ And we see this question being fought over within every advanced democracy.
It’s Technology, Stupid
Media theorists are mapping our journey into the technological unknown. Our present institutions and habits of democracy developed within a culture of print media. They developed in an ‘Age of Reason’, when truth was no longer jealously guarded by Church and State, but instead was made accessible to every man (though not every woman, back then) with the ability to read. Rational thought is the unique human capacity that separates civilization from the state of nature, and it is the justification for giving you and me a vote, and for protecting the ‘public sphere’ with rights to speak, to publish and to assemble. This idealized view of democracy formed the basis for a system of government that worked, somewhat—at least, better than the alternatives.
But now, with social media, big data and smart algorithms, we have shattered the public sphere into a billion individual shards of glass. No ‘national conversation’ connects us anymore, yet all of us, with our own shard of glass, can poke our neighbor’s eye. Can discourse still be rational in a medium where the audience to the lie can easily outnumber the audience to the truth? Is ‘popular sovereignty’ still possible in a medium that easily admits foreign interference?
In short, ‘Is democratic discourse still viable?’ It’s the unsettling question that lurks inside all our smartphones.
Calling All Theorists
Even from this quick-and-dirty sketch of the terrain, some common features show up:
The world still makes sense
Hindsight is always 20/20. Even so, it’s comforting to know that the world (no matter how new or unfamiliar it may seem) still does make sense—once we shift the facts that we pay attention to. Is this a moment of accelerating progress, or of deepening malaise? Since the 1990s—the fall of the Berlin Wall, the collapse of the USSR, the founding of the WTO and China’s joining up with it, the advent of the World Wide Web—the mainstream focussed mainly on the dramatic gains being made: economically, politically, socially. Now the losses and the system stresses loom much larger in everyone’s thinking.
Either way, WE are the cause
An obvious theme running through every causal tale being told by social scientists is that we have done this to ourselves. There is no alien force, no Act of God, no extra-solar asteroid, to blame. We—that is to say, society—are somehow responsible for the gains and the losses. And so it’s no wonder that the ‘elites’ among us (which is to say, anyone who gained, or who stood in a position of power, during this period of change) have become the chief object of popular rage.
‘Change’ is the new axis that divides us
Given that we ourselves are the reason we’ve sailed into this unfamiliar territory, the choice before us is clear. Option One: To reverse course. To revert to the familiar way things were in our idealized memory. Option Two: To burn our ships. To demand of ourselves that we adapt to a new world.
‘Change’ is now the most important concept in our politics. More of it or less of it, forward or back, Liberal or Conservative: this is the debate that now animates society. It is far more relevant right now than the political debates of ‘Left’ vs ‘Right’—even to political parties themselves. In the U.S., for example, the Democratic Party is split between those who still see progressive possibilities in immigration, trade or technological disruption, and those who now want to slow down these trends for the sake of those who have been left behind. Republicans are divided, too. Some still see change as ‘creative destruction’ that generates wealth for those willing to work for it. Others now see change as a threat to a way of life, laying waste to traditional industries, traditional values and traditional communities.
If that’s right—if ‘change’ really has become the main axis of our political differences—then, to face these differences squarely, we’re going to need to make an additional choice. It used to be that ‘Left’ and ‘Liberal’ were one and the same choice, more or less. So, too, with ‘Right’ and ‘Conservative’. Now, they’re distinct. If you choose Left, you still need to choose again: Liberal (Hillary Clinton-esque) or Conservative (Bernie Sanders-ish). If you choose Right, you still need to choose again: Liberal (George Bush-ophile) or Conservative (Donald Trump-ization).
We are, in a sense, back at the beginning. More change or less change: this is the oldest debate in political history. It’s far older than our debates about Left vs Right, which began relatively recently, with the seating arrangements of our Assemblée Nationale, our House of Parliament, our Congress.
That’s frustrating. Surely by now, after several thousand years of civilization, we should be ready to move on to new questions.
But it’s also exciting. A return to our political beginnings is an opportunity to renew and refresh ideals way down at the bedrock of civilization. And it’s an opportunity to reinvent political parties, and political leadership, to face head-on the old question that once again divides us.
Just a short letter this week. I’m doing a bunch of interviews and podcasts in the U.S. at the moment, to coincide with the U.S. paperback release of Age of Discovery (Revised Edition). It’s hard to look at events in the U.S.—ranging from the Florida Parkland school shooting to the Trump Administration’s efforts to deport Dreamers—through a Renaissance lens. And often heartening, too.
(This is one of my favorite conversations so far, with American super-podcaster Scott Jones on his podcast, Give & Take.)
Thank you, all, for the flood of ideas in response to my letter last week. As you’ll recall, I’ve been searching for the best English-language equivalent to the Estonian concept of ‘kratt’, to help us have a clearer public conversation about A.I. in society—in particular, about the rights and responsibilities of soon-to-be-everywhere autonomous agents that will drive cars, buy groceries and manage stock portfolios on our behalf. Suggestions ranged from ‘butlers’ to ‘tin men’, and the idea I liked best came from my friend Ernesto Oyarbide: ‘avatar’.
Popular culture today probably associates the word ‘avatar’ most strongly with James Cameron’s 2009 Hollywood blockbuster of the same name (or, as the director himself called it, ‘Dances With Wolves in space’). I like the word because it meets the three criteria I set forth last week. To review, it:
- Captures the notion of an agent that represents, or is an extension of, my will;
- Omits the notion that the agent could formulate its own goals or agenda against my will; and
- Is instantly familiar, and thus intuitive, to a wide range of people.
The word itself originates with the Hindu notion that the gods can descend (the Sanskrit verb is ava-tara) to the human world by pouring their essence into another form. That original notion captures my #1 and #2 perfectly. As for my #3, the science fiction writer Neal Stephenson popularized the word as far back as 1992 with his bestseller, Snow Crash. In Neal’s book, real people controlled avatars in a virtual-reality world called the Metaverse. Since then, ‘avatars’ have become a common metaphor for ‘user IDs’ in many online communities. And the word will become even more recognizable once James Cameron releases all his Avatar sequels. (According to Vanity Fair, work has already begun on four sequels, to be filmed back-to-back-to-back-to-back through 2018, at a total budget of $1 billion.)
I also like the word because of its magical, mystical connotations. There is a branch of modern philosophy that traces the history of social thought and argues that civilization is due for a revival of magic. The Enlightenment ushered in an Age of Reason. Now, some argue, the pendulum is swinging back toward the spiritual, the mythical. (But that’s going to have to be the subject of a future letter…)
So thank you, everyone, and Ernesto, for ending my word-hunt. Here’s a prediction for you: by 2020, ‘Avatar law’ is going to be a Real Big Thing — a serious branch of legal innovation, and probably a whole industry of punditry and startups as well. (If anyone wants to go further down this rabbit hole with me, let me know.)
The techno-optimists are driving AI forward.
And we, as citizens, are bombarded by the promises and portents of its consequences. AI will destroy our jobs. AI will eliminate drudgery and leave us more time to be creative. AI will solve our information overload. AI will save lives—on the road, in healthcare, on the battlefield. AI will end humanity. It will be our friend, says Bill Gates. It will be our enemy, says Elon Musk.
So which is it?
I’m still mentally unpacking from my trip to Estonia, the week before last. One of my most stimulating conversations that week was with Marten Kaevats, National Digital Advisor to the Prime Minister. Marten is a thirty-something thinker with shocking hair, a rambling, breathless rate of speech, and a knack for explaining difficult concepts using only the objects in his pockets. His job is to help the government of Estonia create the policies that will help build a better society atop digital foundations.
Marten and I had a long chat about AI—by which I mean, I nudged Marten once, snowball-like, at the top of an imaginary hill, and he rolled down it, gaining speed and size all the time, until flipcharts and whiteboard markers were fleeing desperately out of his path.
Here’s what I took away from it.
Fuzzy Language = Fuzzy Thinking = Fuzzy Talk
Marten’s very first sentence on the topic hit me the hardest: ’You cannot get the discussion going if people misunderstand the topic.’
That is our problem, isn’t it? ‘AI’—artificial intelligence—is a phrase from science fiction that has suddenly entered ordinary speech. We read it in headlines. We hear it on the news. It’s on the lips of businesspeople and technologists and academics and politicians around the world. But no one pauses to define it before they use it. They just assume we know what they mean. But I don’t. Science fiction is littered with contradictory visions of AI. Are we talking about Arnold Schwarzenegger’s Terminator? Alex Garland’s Ex Machina? Stanley Kubrick’s HAL in 2001: A Space Odyssey? Ridley Scott’s replicants in Blade Runner? Star Wars’ C-3PO? Star Trek’s Lt. Commander Data?
Our use of the term ‘AI’ in present-day technology doesn’t clear things up much, either. Is it Amazon’s Echo? Apple’s Siri? Elon Musk’s self-driving Tesla? Is it the algorithm that predicts which show I’ll want to watch next on Netflix? Is it the annoying ad for subscription-service men’s razors that seems to follow me around everywhere while I browse the Internet? Is that AI? If so, god help us all…
We don’t have a clear idea of what they’re talking about. So how can society possibly get involved in the conversation—a conversation that, apparently, could decide the fate of humanity?
We’re Confusing Two Separate Conversations
Society needs to have two separate conversations about ‘artificial intelligence’. One conversation has to do with the Terminators and the C-3POs of our imagination. This is what we might call strong AI: self-aware software or machines with the ability to choose their own goals and agendas. Whether they choose to work with us, or against us, is a question that animates much of science fiction—and which we might one day have to face in science-reality. Maybe before the mid-point of this century. Or maybe never. (Some AI experts, like my good friend Robert Elliott Smith, have deep doubts about whether it’ll ever be possible to build artificial consciousness. Consciousness might prove to be a unique property of complex, multi-celled organisms like us.)
The other, more urgent conversation we need to have concerns the kind of AI that we know is possible. Call it weak AI. It’s not capable of having its own goals or agendas, but it can act on our behalf. And it’s smart enough to perform those tasks the same as, or better than, we could do them ourselves. This is Tesla’s autopilot: it can drive my car more safely than I can, but it doesn’t know that it’s ‘driving a car’, nor can it decide it’d rather read a book. This is IBM’s chess-playing Deep Blue, or Google DeepMind’s AlphaGo: they can play strategy games better than the best human, but they do not know that they’re ‘playing a game’, nor could they decide that they’d really rather bake cookies.
Most present-day public discourse on AI confuses these two very different conversations, making it difficult to have clear arguments, or reach clear views, on either of them.
A Clearer Conversation (If You Speak Estonian)
Back to my chat two weeks ago with Marten. What makes him such a powerful voice in Estonia on the questions of how technology and society fit together is that he doesn’t have a background in computer science. He began his career as a professional protestor (advocating rights for cyclists), then spent a decade as an architect and urban planner, and only from there began to explore the digital foundations of cities. When Marten talks technology, he draws, not upon the universal language and concepts of programmers, but upon the local language and concepts of his heritage.
Marten and his colleagues in the Estonian government have drawn from local folklore to conduct the conversation that Estonians need to have about ‘weak AI’ in language that every Estonian can understand. So, instead of talking with the public about algorithms and AI, they talk about ‘kratt’.
Every Estonian—even every child—is familiar with the concept of kratt. For them it’s a common, centuries-old folk tale. Take a personal object and some straw to a crossroads in the forest, and the Devil will animate the straw-thing as your personal slave in exchange for a drop of blood. In the old stories, these kratt had to do everything their master ordered them to. Often they were used for fetching things, but also for stealing things on their master’s behalf or for battling other kratt. ‘Kratt’ turns out to be an excellent metaphor to help Estonians—regardless of age or technical literacy—debate deeply the specific opportunities and ethical questions, the new rights and new responsibilities, that they will encounter in the fast-emerging world of weak AI servants.
Already, Estonian policy makers have clarified a lot of the rules these agents will live under. #KrattLaw has become a national conversation, from Twitter to the floor of their parliament, out of which is emerging the world’s first legislation for the legal personhood, liability and taxation of AI.
Is there an equivalent metaphor to help the rest of us do the same? In 1920, the Czech science fiction writer Karel Čapek invented the word ‘robot’ (from the Slavic word ‘robota’, meaning forced labor). At the time—and ever since—it has helped us to imagine, to create and to debate a world in which animated machines serve us.
Now, we need to nuance that concept to imagine and debate a world in which our robots represent us in society and exercise rights and responsibilities on our behalf: as drivers of our cars, as shoppers for our groceries, as traders of our stock portfolios or as security guards for our property.
I haven’t found the perfect metaphor yet; if you do, please, please share it with me. The ideal metaphor would:
1. Capture the notion of an agent that represents, or is an extension of, our will;
2. Omit the notion that the agent could formulate its own goals or agenda; and
3. Be instantly familiar, and thus intuitive, to a wide range of people.
My first thought was a ‘genie’, but that’s not quite right. Yes, a genie is slave to the master of the lamp (1), and yes we’re all familiar with it (3), but it also has its own agenda (to trick the master into setting it free). That will to escape would always mix up our public conversation between ‘weak’ and ‘strong’ AI.
My other thought was a ‘familiar’, which fits the concept of ‘weak AI’ closely. In Western folklore, a familiar (or familiar spirit) is a creature, often a small animal like a cat or a rat, that serves the commands of a witch or wizard (1) and doesn’t have much in the way of its own plans (2). But I doubt enough people are familiar (ba-dum tss) with the idea for it to be of much use in public policy debates—except, perhaps, among Harry Potter fans and other readers of fantasy fiction.
We Can Start Here
I only know that we need this conceptual language. During last month’s stock market collapse, billions of dollars were lost by trading bots that joined in the sell-off. Is anyone to blame? If so, who? Or who will be to blame when—as will eventually happen—a Tesla on autopilot runs over a child? The owner of the algorithm? Its creator? The government, for letting the autopilot drive on the road?
Every week, artificially intelligent agents generate more and more headlines. Our current laws, policy-making, ethics and intuitions are failing to keep pace.
With new language, we can begin to catch up.
Every week, we’re greeted with a new story about a cyber hack or attack. The 2018 Winter Olympic Games website was hacked during the opening ceremony last Friday night. Last week, it was confirmed that Russian operatives had hacked voter registration databases in multiple US states prior to the 2016 presidential election. Over the last month, several billion dollars’ worth of crypto-currencies have been stolen in multiple cyber-bank heists. The biggest hack in the last year was of Equifax, the US consumer credit scoring company, from which 145.5 million records were stolen—including people’s names, social insurance numbers, drivers’ licenses, dates of birth and addresses. Globally, almost one billion Internet users were affected by malware or viruses in 2017.
All of which raises the question: How wise is it for us to build a ‘smart’ society—one that increasingly relies upon the digital medium for everything from filing taxes to driving cars?
I was in Estonia last week, escorting a delegation of government ministers from the Persian Gulf state of Oman, to help them find answers. Estonians were forced to ask this question sooner than most of us. Estonia is a small Baltic state of 1.3 million people. It’s a member of NATO. It’s one of the most digitized societies in the world. And it shares a border with Russia. In 2007, a cyber-assault widely attributed to Russia hit Estonia’s digital infrastructure, temporarily shutting down the country’s parliament, banks, ministries, newspapers and broadcasters.
Up until that attack, Estonia’s leaders, especially in government, had been concentrating on building a digital paradise. And they were succeeding. For most Estonians, calculating and filing one’s personal income taxes each year takes less than two minutes (and this year can be done with a few finger taps on the Tax Office’s Apple Watch app). Ambulance medics can know your medical history and medications before they arrive at the scene of your accident. Firefighters can know how many people are in your burning building (and whether any have mobility problems) before they arrive at the scene of an alarm. Students can send their transcripts to a university with a single tap, and it takes less time to open a bank account or register a company than anywhere else in the world.
The government estimates that it has eliminated one full week—per year, per citizen—of time spent accessing government services: filling forms, standing in lines, filing taxes. The increased productivity across the whole economy is enough to fund the country’s entire national defense budget. Other benefits are harder to quantify. While Americans debate whether to add more polling stations or keep polls open later on Election Day, Estonians can vote online anytime (for a week until the polls close) from any device—anywhere in the world.
But the 2007 cyber attack forced Estonia’s leadership to admit that it had been too blasé about securing its digital way of life up to that point.
Safety. Security. Privacy. Sharing. Trust. The digital medium puts all these values in new tension with each other. And those tensions need to be resolved.
Take privacy and sharing. Amidst so many cybercrimes, data privacy has become a public concern. We’re learning not to trust governments and corporations with our data. Perhaps instead of following the Estonian model, we should insist that government only use our personal data for the explicit purpose for which it was collected—and destroy it afterwards.
Understood this way, data privacy stands in opposition to data sharing. Data sharing is the practice of exchanging and aggregating our personal data, for the sake of efficiency or, in this age of algorithms, to discover important patterns to help us do things better.
We can either share our data to make society ‘smarter’. Or we can preserve everyone’s individual privacy.
Dial Back? Or Double Down?
Estonia is trying hard to expose this choice to be a false one. Faced in 2007 with the question of reversing course or charging ahead with its digital agenda, the country’s leadership clarified a core belief: the digital medium is here to stay. A society can no more turn away from the digital medium than Europe could turn away from the print medium 500 years ago.
If that’s right, then the only way out of these new tensions is through. The Estonian argument I heard last week is that data privacy and data sharing are compatible, once the latter is properly understood. Part of our misunderstanding stems from the use of the word ‘sharing.’ This is a misnomer. It suggests: I give you my data, and you give me yours. Estonians do not ‘share data’ in this vague way. Instead, they break ‘sharing’ into two precise ideas: data ownership and data contracts.
For example: One of the most commonly used public databases in Estonia is the population registry: a database that contains every resident’s vital statistics (name, date of birth, gender, etc) and address. Such basic data is useful to almost every public- and private-sector organization, in almost any transaction. But it has only one, legally liable owner: the Ministry of Statistics. Before any other organization can access the data (say, the police or a bank), they must negotiate a contract with the Ministry that specifies their data privileges and responsibilities. Typically, such contracts are for the minimum data needed to satisfy a valid query. The Ministry of Statistics won’t reveal a resident’s full address when a Yes/No answer—‘Is this person a resident, Y/N?’—will suffice.
Every transaction involving my personal data is recorded (Which entity requested what data for what purpose?), and I can access a log of all those transactions online at any time. This transparency helps me to trust that my data isn’t being misused.
It was remarkable to see for myself the daily conveniences that Estonia has built atop this trust foundation. Perhaps most remarkably, all this trusted data exchange has actually increased data privacy for the average resident. As a Canadian in the UK, I’d need to show my passport to a letting agency to rent an apartment—which gives the letting agency far more information about me than they have any business knowing—or storing on their insecure office machines. In Estonia, all the letting agency needs to know is what their digital query to the Immigration Office tells them: Is this person an eligible resident, Y/N?
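For the programmers among you, the pattern is simple enough to sketch in a few lines. Everything below is invented for illustration (the class, field names and ID numbers are all hypothetical; Estonia’s real data-exchange layer, X-Road, is far more elaborate), but it captures the two ideas: answer only the Yes/No question that was asked, and log every request so the citizen can audit it later.

```python
# Hypothetical sketch of Estonia's minimal-disclosure query pattern.
# All names and data here are invented; the real X-Road system is
# far more elaborate than this illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PopulationRegistry:
    residents: set          # national ID numbers (the full records stay here)
    access_log: list = field(default_factory=list)

    def is_resident(self, requester: str, person_id: str) -> bool:
        """Answer only the Yes/No question; never return the full record."""
        # Every query is logged: who asked, what they asked, and when,
        # so the citizen can later audit how their data was used.
        self.access_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": requester,
            "what": f"is_resident({person_id})",
        })
        return person_id in self.residents


registry = PopulationRegistry(residents={"38001010000"})
print(registry.is_resident("letting-agency-A", "38001010000"))  # True
print(len(registry.access_log))  # one auditable log entry so far
```

The design choice worth noticing is that the requester never touches the underlying record: the letting agency learns a single bit, while the legally liable owner keeps the data and the audit trail.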
Estonia is also trying to find the way through hard choices on cyber security. Its post-2007 security ethos: no digital network is 100% secure. Everything is hackable. Therefore, securing a digital society must be about resilience (be the harder target, so that hackers go after someone easier) and recoverability (when you get knocked down, how quickly can you get back up?).
Estonia demonstrated its resilience during the May 2017 WannaCry ransomware attack, attributed to North Korea, which crippled more than 200,000 computers across 150 countries—but did not affect a single machine in Estonia. And it is demonstrating its commitment to recoverability this month, as it formally opens the world’s first ‘data embassy’ in Luxembourg. (Its data embassies will back up all essential public data—and will be able to take over running public data services if the country’s own servers fall to cyberattack again.)
Hard Choices? Or False Choices?
My week of conversations in Estonia left me with two dominant impressions. The first is that hard choices under an old paradigm can become false choices under a new one. As the news, good and bad, of our digital capabilities and vulnerabilities continues to crowd the headlines, will we have the vision, and the wisdom, to make that distinction?
The other impression is that when it comes to the digital medium, the greatest risk may be to linger halfway between the analog and the digital ways of life. Judging by our daily habits, we are all quite happy to reap the benefits of the digital medium. Are we prepared to adapt to the responsibilities as well?
Remember John Podesta, the 2016 campaign chairman for Hillary Clinton whose emails were hacked and posted to Wikileaks? His Gmail password was reportedly runner123…
I was going to write today about the annual World Economic Forum in Davos—and I did pen this very brief critique (To stave off revolution, Davos must do something radical. Here it is.).
But then my (and everyone’s) attention swung. The news that everyone here in London was talking about this week, from the Prime Minister’s Office to the pub, was the annual ‘charity dinner’ hosted by the Presidents Club, which brought together 360 male businesspeople and 130 young ‘hostesses’, then proceeded to demonstrate a great deal about the landscape of gender relations in London in 2018.
And about how quickly that landscape is changing. A year ago, did anyone think to cover the 2017 charity dinner of the Presidents Club? (This year’s dinner was the 33rd annual in the Club’s history.) But in the wake of the Weinstein stories, and the #metoo and #timesup movements, suddenly the Financial Times sniffed a significant story and sent a couple of female reporters undercover to blow this scandal wide open.
I wonder what this week will bring?
Brave voyages, all of us,
Some maps are deeper than others
Navigating change is an exercise in self-awareness. If we want to ‘make new maps’ to help us manoeuvre (for U.S. friends, ‘maneuver’) to a new world, step one is to discover what our present maps are. What are the maps we have been navigating by up until now?
Some of our maps are filed more deeply than others. If we imagine a chest in which all our mental and cultural maps are stored, I’m betting that ‘gender relations’ is in the very bottom drawer—that is, so embedded in everyone’s thoughts and behaviors that, until recently, one might never pull it out for study and yet never be accused of making a wrong turn.
The universal condemnation that London’s Presidents Club, Harvey Weinstein and other sexual misconduct cases generate today, contrasted with our apparent tolerance of the same behaviors just one year ago, suggests that we could all benefit from opening that bottom drawer, lifting this map out and putting it under a bright reading light.
As the younger brother to an older sister, I learned from a very young age to respect female authority. Beyond that, this isn’t an area that I’ve studied. So to help me understand our current cultural map of gender roles, I turned to a new book by Mary Beard, Women and Power: A Manifesto. Mary Beard is a household name in the UK and a world-renowned historian based at Cambridge University. She is a professor (the professor, really) of classical Greece and Rome. (And her Tweets are sharp and witty.)
Ancient Greece and Rome are relevant to many immediate challenges (particularly in the West), because much of our culture has been inherited from, and is still influenced by, the classical world. Socrates and Caesar shaped our basic ideas about democracy and tyranny. Aristotle and Marcus Aurelius shaped our ethical intuitions. And, Mary argues, ‘when it comes to silencing women, Western culture has had thousands of years of practice.’
We’ve mapped gender and power together
As her title suggests, Mary’s book is only half about gender. The other half is about power, and the two are inseparable. Since the time of classical Greece and Rome, power has been gendered. In almost all the stories that survive from that time, the female characters make clear the role of women in society. Homer’s Odyssey, about Odysseus’s return home from the Trojan War, begins with a scene between Odysseus, his wife Penelope, and their son Telemachus:
Penelope comes down from her private quarters into the great hall of the palace, to find a bard performing. He is singing about the difficulties the Greek heroes are having in reaching home. She isn’t amused, and in front of everyone she asks him to choose another, happier number. At which point young Telemachus intervenes: “Mother, go back up into your quarters, and take up your own work, the loom and the distaff…speech will be the business of men, all men, and of me most of all; for mine is the power in this household.”
In the early 4th century BC:
Aristophanes devoted a whole comedy to the “hilarious” fantasy that women might take over running the state. Part of the joke was that women couldn’t speak properly in public—or rather, they couldn’t adapt their private speech (which in this case was largely fixated on sex) to the lofty idiom of male politics.
And in Ovid’s Metamorphoses (an epic about people changing shape):
Poor Io (one of Zeus’ mortal lovers) is turned by the god Jupiter into a cow, so she cannot talk but only moo; while the chatty nymph Echo is punished so that her voice is never her own, merely an instrument for repeating the words of others.
In part out of such stories as these, a specific cultural ideal of power and authority was shaped. Public speech had a gender; it was by definition male. Power had a pitch: male, low, ‘profound’. A high-pitched voice was by definition female—‘strident’, ‘whining’ and weak. During the European Renaissance, when many cultural ideals of classical Greece and Rome were reborn, these classical models of authority were likewise reinvigorated. Fresh generations of would-be statesmen started to read the speeches of Cicero, the triumphs of Caesar and the meditations of Marcus Aurelius. Mary isn’t arguing that the classical world was the only influence on our gendered notion of power, but ’classical traditions have provided us with a powerful template for thinking about public speech, and for deciding what counts as good oratory and bad, persuasive or not, and whose speech is to be given space to be heard.’
Forced to choose
Within a culture such as ours, where power and authority are ‘coded’ as male, women have a choice: fit into that structure, or change the structure itself. In the classical world, the only viable option was the former. Publicly outspoken women had to cloak their femininity somehow. (The goddess of war, Athena, dressed in a soldier’s uniform and remained a virgin.) Or they restricted their public speech to ‘women’s issues’: the home, children, their husbands or the interests of women.
Mary notes that Western women have made the former choice to claim a public voice all the way up to the present day. In 1588, in her Speech to the Troops at Tilbury, Elizabeth I of England told her soldiers:
I know I have the body of a weak, feeble woman; but I have the heart and stomach of a king, and of a king of England too…
Margaret Thatcher famously took lessons to lower the pitch of her voice. Angela Merkel and Hillary Clinton wear pantsuits—probably out of choice, convenience and practicality, but also to fit our expectations of what power looks like. Mary also suggests that:
It was the disconnect in our heads between ‘women’ and ‘power’ that made Melissa McCarthy’s parodies of the one-time White House press secretary Sean Spicer on Saturday Night Live so effective. It was said that these annoyed President Trump more than most satires of his regime because, according to sources close to him, “he doesn’t like his people to appear weak.” Decode that, and what it actually means is that he doesn’t like his men to be parodied by/as women.
(If power is gendered as male in our world, then for Trump weakness is also gendered—as female.)
But perhaps now is the moment in our culture when we ‘change the structure itself.’ This, Mary argues, is the prime map-making opportunity of our time: to become critically self-aware of what we expect power to look and sound like, and to redraw those expectations. To re-code power to be gender-inclusive. ‘It is happily the case that there are now more women in what we would all probably agree are “powerful” positions than there were ten, let alone fifty years ago…But my basic premise is that our mental, cultural template for a “powerful person” remains resolutely male.’
A gender-inclusive map of power would, Mary thinks, distinguish ‘power’ from ‘public prestige’ or ‘celebrity.’ It would diminish the notion of power as a noun and treat it more as a verb: less a thing that can be possessed (which implies that others do not possess it) and more an action that might come from anywhere. It would be both individual and collective—recognizing the power of followers alongside the power of leaders.
Or we could go the other way, and, as in the European Renaissance, take this opportunity to reinvigorate classical ideas about gendered power and speech (like this Republican candidate for the U.S. Senate in Missouri, who wants his daughters to be homemakers, not ‘career obsessed banshees’).
Personally, I find Mary’s project more interesting. She opens a fresh dimension—power—to our awakening conversation about gender. We know power when we see it. Let’s get curious about that map. How do we know power when we see it? What signifies power to us? And how might we scramble those signals?
More from Mary Beard
‘Women in Power’ (YouTube, 2017) — Mary’s full public lecture, upon which her book Women & Power is based (73 minutes).
‘The Millennia of #MeToo’ (The New Yorker, 2017) — A review of Mary’s book that then evolves into a discussion of what the electoral contest between Hillary Clinton and Donald Trump revealed about gendered power in the U.S.
‘The Poison of Patriarchy’ (The Guardian, 2017) — All the main ideas from Mary’s book, compressed into a 5-minute read.
Donald Trump’s presidency performs a great service for the world. It is to lay bare our vulnerabilities. He reveals the fragility of institutions once thought to be rock-solid (the Republican Party, the free press, the FBI, NATO, NAFTA…). And he highlights new threats whose urgency many of us hadn’t yet appreciated—like foreign cyber influence in democratic elections, or algorithms that subdivide ‘public discourse’ into a collection of tribal rallies.
The great harm that Donald Trump’s presidency performs is to steal our attention away from so many other threats.
For me, the starkest example of the latter came this past week, when the US Centers for Disease Control announced plans to scale back its Global Health Security initiative, put in place in the aftermath of West Africa’s 2014 Ebola epidemic. Few people noticed that sobering headline, coming as it did the same day as news that the Trump-appointed head of the CDC, Brenda Fitzgerald, was resigning over a conflict-of-interest scandal: shortly after her appointment, she had invested heavily in Big Tobacco.
The logic for setting up and funding the Global Health Security Agenda (GHSA) was as pure and simple as policy-making can get:
- The next pandemic is coming. We know this because, as we all witness every year, Nature never gives up trying to make a deadlier flu.
- It is far cheaper to prevent a pandemic than to fight one after it breaks out.
- The hot spots where the next pandemic is most likely to emerge are also the countries least able to prevent, detect and respond to outbreaks.
The 2014 Ebola epidemic that hit Guinea, Sierra Leone and Liberia was supposed to be the case that spurred developed-world governments to act upon this obvious logic. Ebola causes massive bleeding throughout the body and kills 50% to 90% of the healthy people it infects. In research labs, it is stored with the same precautions as anthrax and smallpox, and for a brief time in 2014 it roamed free in West African cities linked to the world by air- and seaports. By 2015, over 11,000 people had died in the region. It nearly became a global catastrophe. It could have been isolated to a single village—with proper preventive measures.
To its credit, the US Congress did act in 2014—with a five-year, $600 million funding boost to the CDC to bolster global health security. The new money has seen the CDC train disease detectives and strengthen emergency response in countries where disease risks are greatest. The goal is to stop future outbreaks at their source. (Last year, CDC-trained responders quickly contained just such an outbreak of Ebola in Congo.)
Relative to the risks, $120 million per year is a cheap insurance policy. (The 2014 Ebola outbreak, which didn’t reach US shores, still cost US taxpayers $5.4 billion in federal emergency funding, and tens of millions more in overseas military deployment, municipal prevention efforts and global business disruptions.)
In 2019, that insurance policy runs out. Every indication is that the Trump Administration will not renew it: his budget for 2018 calls for an 18% cut to Health & Human Services, which includes the CDC. It also calls for the CDC’s sister program at USAID—another $72 million budget for global health security—to be eliminated entirely.
With no new money in its future, the CDC has already begun scaling back its preventive programs—in the very same hot spots where, over the past four years, it has prevented outbreaks.
What big questions might this story make us think about? For me, there are three.
First, what time horizon does Donald Trump consider when he decides what to do with his presidency, and what does that mean for the rest of us? A story in The New Yorker about Trump’s speech at Davos two weeks ago said this:
Trump spoke to the only reality he is ordinarily capable of perceiving: right here, right now. As is normal for Trump, his rhetoric implicitly denied the very possibility of a tomorrow in which his Administration’s policies may have consequences beyond an immediate market boost.
When it comes to ‘Making America Great Again’, Donald Trump’s metrics of greatness are so short-term, they are practically ephemeral: the price of the US dollar, the level of the Dow or the NASDAQ, last month’s unemployment figures.
Yes, the short-term matters. Sometimes, it’s all that matters. But often, the long-term matters more. I’d encourage my American cousins, especially in American media, to force Trump to draw the line he sees between his present actions and long-term prosperity. How will his proposed cuts to Health & Human Services help to secure Americans against the next pandemic? How will his healthcare reforms reverse the fall in US life expectancy? How will his tax plan help reinvent US education, industrial relations and social security for a more automated, AI-assisted, gig economy?
Second, what time horizon do we consider? I, personally, spend a lot more time each week laughing with Stephen Colbert about unflattering images of Donald Trump playing tennis than I do encouraging my elected representative to invest more of my tax dollars into pandemic preparedness. The former comes straight to my lap each morning while I munch my Cheerios; the latter seems far, far out of my way. And yet, if and when the next global health crisis does come, that crisis will flip from ‘nowhere on anyone’s to-do list’ to ‘the only thing on everyone’s list.’
Third, how on earth might we change our time horizon—or even believe that it’s possible to choose it? Our technology is luring us into a shorter and shorter awareness of ‘now.’ Our electoral and market cycles lure us into short-term decision-making. And in a world full of ‘disruption’, ‘the long term’ is so unknowable that it seems foolish to plan too much around it. Yet many of the biggest threats to our future can only be met with a long-term view.
I don’t know how we solve that paradox. But the first step, I think, is simply to recognize that the flip-side of attention is inattention. Those of us (like me) who have become so interested in the daily ‘fire and fury’ of Trumplandia must now be less interested in a bunch of other things. That’s the honest price we pay for our fixations. Here in the UK, where Brexit is…well, no one really knows what’s happening right now with Brexit, and that’s one of the big fears growing among long-term civil servants. Whichever way Brexit goes, whatever happens, whole years of government time, attention and salaries will be spent doing something that might have tremendous symbolic importance, but which probably won’t leave the public any better off, in tangible terms. It might, however, make us worse off if, in the future, it turns out we neglected something we shouldn’t have.
Our fixations are costly. We might one day arrive face-to-face with a fresh crisis and wish we had spent more attention elsewhere…
The Davos theme this year is ‘Creating a shared future in a fractured world’. The World Economic Forum is putting on a show of taking seriously the political and social stresses of this moment. The six co-chairs for this year’s conference are all women, led by Christine Lagarde, head of the IMF. And, while the crowd is still dominated by chief executives, political leaders and journalists, the forum is formally a multi-stakeholder get-together and boasts over 500 delegates from civil society, religious organizations and even a handful of unions.
This year’s Davos is a paradox. The global elite are getting together in a Swiss alpine resort to berate themselves—through the voices of Indian prime minister Narendra Modi, Canadian Prime Minister Justin Trudeau and, loudest of all, US President Donald Trump—that they are out of touch with society.
These self-inflicted scoldings are calibrated to soothe elite anxiety. If successful, they will only harm the interests of those in the room.
Elites aren’t nearly anxious enough about how ‘fractured’ the world is, or about the precariousness of their own wealth. This moment is a second Renaissance. And a Renaissance is a time of revolution, not reform. Davos attendees have warmed up to Donald Trump over his first year in office: by not starting a trade war with China (yet) and by offering a generous tax cut to business, he has proven himself to be friendly to their interests. But the same disgust with elite aloofness that elected Trump could have put Bernie Sanders into office just as easily. The global elite got lucky. That’s all.
The excluded in society are restless, and empowered. To stay relevant, to stay safe in their Swiss chalets, chief executives at the Davos gathering should react by doing something revolutionary: commit, collectively, only to contract labour from the gig economy if it is unionized.
It’s not a socialist manifesto; it’s a conservative one. Let labor scramble to self-organize and meet this new demand. Unburden business from trying to meet social objectives that confuse investment decisions.
Workplace unions are obsolete—organizing for better factory conditions is irrelevant once we’ve gotten rid of the factory. Organizing for better social conditions is urgent—but isn’t a priority for Davos Man. That’s why the revolution is coming. Unless business brings it first.