Map #22: Catching Up With A.I.


The techno-optimists are driving AI forward.

And we, as citizens, are bombarded by the promises and portents of its consequences. AI will destroy our jobs. AI will eliminate drudgery and leave us more time to be creative. AI will solve our information overload. AI will save lives—on the road, in healthcare, on the battlefield. AI will end humanity. It will be our friend, says Bill Gates. It will be our enemy, says Elon Musk.

So which is it?

I’m still mentally unpacking from my trip to Estonia, the week before last. One of my most stimulating conversations that week was with Marten Kaevats, National Digital Advisor to the Prime Minister. Marten is a thirty-something thinker with shocking hair, a rambling, breathless rate of speech, and a knack for explaining difficult concepts using only the objects in his pockets. His job is to help the government of Estonia create the policies that will help build a better society atop digital foundations.

Marten and I had a long chat about AI—by which I mean, I nudged Marten once, snowball-like, at the top of an imaginary hill, and he rolled down it, gaining speed and size all the time, until flipcharts and whiteboard markers were fleeing desperately out of his path.

Here’s what I took away from it.


Fuzzy Language = Fuzzy Thinking = Fuzzy Talk

Marten’s very first sentence on the topic hit me the hardest: ‘You cannot get the discussion going if people misunderstand the topic.’

That is our problem, isn’t it? ‘AI’—artificial intelligence—is a phrase from science fiction that has suddenly entered ordinary speech. We read it in headlines. We hear it on the news. It’s on the lips of businesspeople and technologists and academics and politicians around the world. But no one pauses to define it before they use it. They just assume we know what they mean. But I don’t. Science fiction is littered with contradictory visions of AI. Are we talking about Arnold Schwarzenegger’s Terminator? Alex Garland’s Ex Machina? Stanley Kubrick’s HAL in 2001: A Space Odyssey? Ridley Scott’s replicants in Blade Runner? Star Wars’ C-3PO? Star Trek’s Lt. Commander Data?

Our use of the term ‘AI’ in present-day technology doesn’t clear things up much, either. Is it Amazon’s Echo? Apple’s Siri? Elon Musk’s self-driving Tesla? Is it the algorithm that predicts which show I’ll want to watch next on Netflix? Is it the annoying ad for subscription-service men’s razors that seems to follow me around everywhere while I browse the Internet? Is that AI? If so, god help us all…

We don’t have a clear idea of what any of them are talking about. So how can society possibly get involved in the conversation—a conversation that, apparently, could decide the fate of humanity?


We’re Confusing Two Separate Conversations

Society needs to have two separate conversations about ‘artificial intelligence’. One conversation has to do with the Terminators and the C-3POs of our imagination. This is what we might call strong AI: self-aware software or machines with the ability to choose their own goals and agendas. Whether they choose to work with us, or against us, is a question that animates much of science fiction—and which we might one day have to face in science-reality. Maybe before the mid-point of this century. Or maybe never. (Some AI experts, like my good friend Robert Elliott Smith, have deep doubts about whether it’ll ever be possible to build artificial consciousness. Consciousness might prove to be a unique property of complex, multi-celled organisms like us.)

The other, more urgent conversation we need to have concerns the kind of AI that we know is possible. Call it weak AI. It’s not capable of having its own goals or agendas, but it can act on our behalf. And it’s smart enough to perform those tasks the same as, or better than, we could do them ourselves. This is Tesla’s autopilot: it can drive my car more safely than I can, but it doesn’t know that it’s ‘driving a car’, nor can it decide it’d rather read a book. This is IBM’s chess-playing Deep Blue, or Google DeepMind’s AlphaGo: they can play strategy games better than the best human, but they do not know that they’re ‘playing a game’, nor could they decide that they’d really rather bake cookies.

Most present-day public discourse on AI confuses these two very different conversations, so that it’s very difficult to have clear arguments, or reach clear views, on either of them.


A Clearer Conversation (If You Speak Estonian)

Back to my chat two weeks ago with Marten. What makes him such a powerful voice in Estonia on the questions of how technology and society fit together is that he doesn’t have a background in computer science. He began his career as a professional protestor (advocating rights for cyclists), then spent a decade as an architect and urban planner, and only from there began to explore the digital foundations of cities. When Marten talks technology, he draws, not upon the universal language and concepts of programmers, but upon the local language and concepts of his heritage.

Marten and his colleagues in the Estonian government have drawn from local folklore to conduct the conversation that Estonians need to have about ‘weak AI’ in language that every Estonian can understand. So, instead of talking with the public about algorithms and AI, they talk about ‘kratt’.

Every Estonian—even every child—is familiar with the concept of kratt. For them it’s a common, centuries-old folk tale. Take a personal object and some straw to a crossroads in the forest, and the Devil will animate the straw-thing as your personal slave in exchange for a drop of blood. In the old stories, these kratt had to do everything their master ordered them to. Often they were used for fetching things, but also for stealing things on their master’s behalf or for battling other kratt. ‘Kratt’ turns out to be an excellent metaphor to help Estonians—regardless of age or technical literacy—debate deeply the specific opportunities and ethical questions, the new rights and new responsibilities, that they will encounter in the fast-emerging world of weak AI servants.

Already, Estonian policy makers have clarified a lot of the rules these agents will live under. #KrattLaw has become a national conversation, from Twitter to the floor of their parliament, out of which is emerging the world’s first legislation for the legal personhood, liability and taxation of AI.


Translating ‘Kratt’?

Is there an equivalent metaphor to help the rest of us do the same? In 1920, the Czech science fiction writer Karel Čapek invented the word ‘robot’ (from the Slavic word ‘robota’, meaning forced labor). At the time—and ever since—it has helped us to imagine, to create and to debate a world in which animated machines serve us.

Now, we need to nuance that concept to imagine and debate a world in which our robots represent us in society and exercise rights and responsibilities on our behalf: as drivers of our cars, as shoppers for our groceries, as traders of our stock portfolios or as security guards for our property.

I haven’t found the perfect metaphor yet; if you do, please, please share it with me. The ideal metaphor would:

  1. Capture the notion of an agent that represents, or is an extension of, our will;
  2. Omit the notion that the agent could formulate its own goals or agenda; and
  3. Be instantly familiar, and thus intuitive, to a wide range of people.

My first thought was a ‘genie’, but that’s not quite right. Yes, a genie is a slave to the master of the lamp (1), and yes, we’re all familiar with it (3), but it also has its own agenda (to trick the master into setting it free). That will to escape would always mix up our public conversation between ‘weak’ and ‘strong’ AI.

My other thought was a ‘familiar’, which fits the concept of ‘weak AI’ closely. In Western folklore, a familiar (or familiar spirit) is a creature, often a small animal like a cat or a rat, that serves the commands of a witch or wizard (1) and doesn’t have much in the way of its own plans (2). But I doubt enough people are familiar (ba-dum tss) with the idea for it to be of much use in public policy debates—except, perhaps, among Harry Potter fans and other readers of fantasy fiction.


We Can Start Here

I only know that we need this conceptual language. During last month’s stock market collapse, billions of dollars were lost by trading bots that joined in the sell-off. Is anyone to blame? If so, who? Or who will be to blame when—as will eventually happen—a Tesla on autopilot runs over a child? The owner of the algorithm? Its creator? The government, for letting the autopilot drive on the road?

Every week, artificially intelligent agents are generating more and more headlines. Our current laws, policy-making, ethics and intuitions are failing to keep pace.

With new language, we can begin to catch up.

Map #21: Hard Choices Or False Choices In Digital Paradise


Every week, we’re greeted with a new story about a cyber hack or attack. The 2018 Winter Olympic Games website was hacked during the opening ceremony last Friday night. Last week, it was confirmed that Russian operatives had hacked voter registration databases in multiple US states prior to the 2016 presidential election. Over the last month, several billion dollars’ worth of crypto-currencies has been stolen in multiple cyber-bank heists. The biggest hack of the past year was of Equifax, the US consumer credit scoring company, from which the records of 145.5 million people were stolen—including names, social security numbers, driver’s licenses, dates of birth and addresses. Globally, almost one billion Internet users were affected by malware or a virus in 2017.

All of which raises the question: How wise is it for us to build a ‘smart’ society—one that increasingly relies upon the digital medium for everything from filing taxes to driving cars?

I was in Estonia last week, escorting a delegation of government ministers from the Persian Gulf state of Oman, to help them find answers. Estonians were forced to ask this question sooner than most of us. Estonia is a small Baltic state of 1.3 million people. It’s a member of NATO. It’s one of the most digitized societies in the world. And it shares a border with Russia. In 2007, the Russian government hit Estonia’s digital infrastructure with a cyber-assault that temporarily shut down the country’s parliament, banks, ministries, newspapers and broadcasters.

Up until that attack, Estonia’s leaders, especially in government, had been focused on building a digital paradise. And they’ve been succeeding. For most Estonians, calculating and filing one’s personal income taxes each year takes less than two minutes (and this year can be done with a few finger taps on the Tax Office’s Apple Watch app). Ambulance medics can know your medical history and medications before they arrive at the scene of your accident. Firefighters can know how many people are in your burning building (and whether any of them have mobility problems) before they arrive at the scene of an alarm. Students can send their transcripts to a university with a single tap, and it takes less time to open a bank account or register a company than anywhere else in the world.

The government estimates that it has eliminated one full week—per year, per citizen—of time spent accessing government services: filling forms, standing in lines, filing taxes. The increased productivity across the whole economy is enough to fund the country’s entire national defense budget. Other benefits are harder to quantify. While Americans debate whether to add more polling stations or keep polls open later on Election Day, Estonians can vote online anytime (for a week until the polls close) from any device—anywhere in the world.

Paradise Lost?

But the 2007 cyber attack forced Estonia’s leadership to admit that it had been too blasé about securing its digital way of life up to that point.

Safety. Security. Privacy. Sharing. Trust. The digital medium puts all these values in new tension with each other. And those tensions need to be resolved.

Take privacy and sharing. Amidst so many cybercrimes, data privacy has become a public concern. We’re learning not to trust governments and corporations with our data. Perhaps instead of following the Estonian model, we should insist that government only use our personal data for the explicit purpose for which it was collected—and destroy it afterwards.

Understood this way, data privacy stands in opposition to data sharing. Data sharing is the practice of exchanging and aggregating our personal data, for the sake of efficiency or, in this age of algorithms, to discover important patterns to help us do things better.

We can either share our data to make society ‘smarter’. Or we can preserve everyone’s individual privacy.


Dial Back? Or Double Down?

Estonia is trying hard to expose this choice as a false one. Faced in 2007 with the question of reversing course or charging ahead with its digital agenda, the country’s leadership clarified a core belief: the digital medium is here to stay. A society can no more turn away from the digital medium than Europe could turn away from the print medium 500 years ago.

If that’s right, then the only way out of these new tensions is through. The Estonian argument I heard last week is that data privacy and data sharing are compatible, once the latter is properly understood. Part of our misunderstanding stems from the use of the word ‘sharing.’ This is a misnomer. It suggests: I give you my data, and you give me yours. Estonians do not ‘share data’ in this vague way. Instead, they break ‘sharing’ into two precise ideas: data ownership and data contracts.

For example: One of the most commonly used public databases in Estonia is the population registry: a database that contains every resident’s vital statistics (name, date of birth, gender, etc) and address. Such basic data is useful to almost every public- and private-sector organization, in almost any transaction. But it has only one, legally liable owner: the Ministry of Statistics. Before any other organization (say, the police or a bank) can access the data, it must negotiate a contract with the Ministry that specifies its data privileges and responsibilities. Typically, such contracts are for the minimum data needed to satisfy a valid query. The Ministry of Statistics won’t reveal a resident’s full address when a Yes/No answer—‘Is this person a resident, Y/N?’—will suffice.

Every transaction involving my personal data is recorded (Which entity requested what data for what purpose?), and I can access a log of all those transactions online at any time. This transparency helps me to trust that my data isn’t being misused.
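
To make that ownership-and-contract pattern concrete, here is a minimal sketch (in Python, purely illustrative) of a registry that answers only a contracted Yes/No question and logs every request for the citizen to inspect. The class, method and field names are my own inventions, not those of Estonia’s real data-exchange infrastructure, which is of course far more elaborate.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AccessLogEntry:
    timestamp: datetime
    requester: str   # which entity asked
    query: str       # what was asked
    purpose: str     # why it was asked


@dataclass
class PopulationRegistry:
    """The single, legally liable owner of the data. Other organizations
    receive answers to contracted questions, never copies of the records."""
    residents: dict                                 # personal_code -> full record
    access_log: list = field(default_factory=list)

    def _log(self, requester, query, purpose):
        self.access_log.append(
            AccessLogEntry(datetime.now(timezone.utc), requester, query, purpose))

    def is_resident(self, requester, purpose, personal_code):
        """Answer only the minimal question: Yes or No, never the full address."""
        self._log(requester, f"is_resident({personal_code})", purpose)
        return personal_code in self.residents

    def my_data_log(self, personal_code):
        """Let a citizen see exactly who asked what about them, and why."""
        return [e for e in self.access_log if personal_code in e.query]


# A letting agency checks eligibility without ever seeing the underlying record.
registry = PopulationRegistry(residents={"39001010001": {"name": "A. Tamm"}})
print(registry.is_resident("letting-agency", "tenancy application", "39001010001"))  # True
print(registry.my_data_log("39001010001"))  # the citizen's own audit trail
```

The design choice worth noticing is that the data and the audit trail both stay with the legal owner; every other party gets only the narrow answer its contract entitles it to.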

It was remarkable to see for myself the daily conveniences that Estonia has built atop this trust foundation. Perhaps most remarkably, all this trusted data exchange has actually increased data privacy for the average resident. As a Canadian in the UK, I’d need to show my passport to a letting agency to rent an apartment—which gives the letting agency far more information about me than they have any business knowing—or storing on their insecure office machines. In Estonia, all the letting agency needs to know is what their digital query to the Immigration Office tells them: Is this person an eligible resident, Y/N?

Estonia is also trying to find the way through hard choices on cyber security. Its post-2007 security ethos: no digital network is 100% secure. Everything is hackable. Securing a digital society, therefore, is about resilience (be a hard target, so that hackers go after someone easier) and recoverability (when you get knocked down, how quickly can you get back up?).

Estonia demonstrated its resilience during the May 2017 WannaCry ransomware attack by North Korea, which crippled more than 200,000 computers across 150 countries—but did not affect a single machine in Estonia. And it is demonstrating its commitment to recoverability this month, as it formally opens the world’s first ‘data embassy’ in Luxembourg. (Its data embassies will back up all essential public data—and will be able to take over running public data services if the country’s own servers fall to cyberattack again.)


Hard Choices? Or False Choices?

My week of conversations in Estonia left me with two dominant impressions. The first is that hard choices under an old paradigm can become false choices under a new one. As the news, good and bad, of our digital capabilities and vulnerabilities continues to crowd the headlines, will we have the vision, and the wisdom, to make that distinction?

The other impression is that when it comes to the digital medium, the greatest risk may be to linger halfway between the analog and the digital way of doing things. Judging by our daily habits, we are all quite happy to reap the benefits of the digital medium. Are we prepared to adapt to the responsibilities as well?

Remember John Podesta, the chairman of Hillary Clinton’s 2016 campaign, whose emails were hacked and posted to Wikileaks? His Gmail password was runner123.

Map #20: Women & Power


I was going to write today about the annual World Economic Forum in Davos—and I did pen this very brief critique (To stave off revolution, Davos must do something radical. Here it is.).

But then my (and everyone’s) attention swung. The news that everyone here in London was talking about this week, from the Prime Minister’s Office to the pub, was the annual ‘charity dinner’ hosted by the Presidents Club, which brought together 360 male businesspeople and 130 young ‘hostesses’, then proceeded to demonstrate a great deal about the landscape of gender relations in London in 2018.

And about how quickly that landscape is changing. A year ago, did anyone think to cover the 2017 charity dinner of the Presidents Club? (This year’s dinner was the 33rd annual in the Club’s history.) But in the wake of the Weinstein stories, and the #metoo and #timesup movements, suddenly the Financial Times sniffed a significant story and sent a couple of female reporters undercover to blow this scandal wide open.

I wonder what this week will bring?

Brave voyages, all of us,

Chris


Some maps are deeper than others

Navigating change is an exercise in self-awareness. If we want to ‘make new maps’ to help us manoeuvre (for U.S. friends, ‘maneuver’) to a new world, step one is to discover what our present maps are. What are the maps we have been navigating by up until now?

Some of our maps are filed more deeply than others. If we imagine a chest in which all our mental and cultural maps are stored, I’m betting that ‘gender relations’ is in the very bottom drawer—that is, so embedded in everyone’s thoughts and behaviors that, until recently, one might never pull it out for study and yet never be accused of making a wrong turn.

The universal condemnation that London’s Presidents Club, Harvey Weinstein and other sexual misconduct cases generate today, contrasted with our apparent tolerance of the same behaviors just one year ago, suggests that we could all benefit from opening that bottom drawer, lifting this map out and putting it under a bright reading light.

As the younger brother to an older sister, I learned from a very young age to respect female authority. Beyond that, this isn’t an area that I’ve studied. So to help me understand our current cultural map of gender roles, I turned to a new book by Mary Beard, Women and Power: A Manifesto. Mary Beard is a household name in the UK and a world-renowned historian based at Cambridge University. She is a professor (the professor, really) of classical Greece and Rome. (And her Tweets are sharp and witty.)

Ancient Greece and Rome are relevant to many immediate challenges (particularly in the West), because much of our culture has been inherited from, and is still influenced by, the classical world. Socrates and Caesar shaped our basic ideas about democracy and tyranny. Aristotle and Marcus Aurelius shaped our ethical intuitions. And, Mary argues, ‘when it comes to silencing women, Western culture has had thousands of years of practice.’


We’ve mapped gender and power together

As her title suggests, Mary’s book is only half about gender. The other half is about power, and the two are inseparable. Since the time of classical Greece and Rome, power has been gendered. In almost all the stories that survive from that time, the female characters make clear the role of women in society. Homer’s Odyssey, about Odysseus’s return home from the Trojan War, begins with a scene between Odysseus, his wife Penelope, and their son Telemachus:

Penelope comes down from her private quarters into the great hall of the palace, to find a bard performing. He is singing about the difficulties the Greek heroes are having in reaching home. She isn’t amused, and in front of everyone she asks him to choose another, happier number. At which point young Telemachus intervenes: “Mother, go back up into your quarters, and take up your own work, the loom and the distaff…speech will be the business of men, all men, and of me most of all; for mine is the power in this household.”

In the early 4th century BC:

Aristophanes devoted a whole comedy to the “hilarious” fantasy that women might take over running the state. Part of the joke was that women couldn’t speak properly in public—or rather, they couldn’t adapt their private speech (which in this case was largely fixated on sex) to the lofty idiom of male politics.

And in Ovid’s Metamorphoses (an epic about people changing shape):

Poor Io (one of Zeus’ mortal lovers) is turned by the god Jupiter into a cow, so she cannot talk but only moo; while the chatty nymph Echo is punished so that her voice is never her own, merely an instrument for repeating the words of others.

In part out of such stories as these, a specific cultural ideal of power and authority was shaped. Public speech had a gender; it was by definition male. Power had a pitch: male, low, ‘profound’. A high-pitched voice was by definition female—‘strident’, ‘whining’ and weak. During the European Renaissance, when many cultural ideals of classical Greece and Rome were reborn, these classical models of authority were likewise reinvigorated. Fresh generations of would-be statesmen started to read the speeches of Cicero, the triumphs of Caesar and the meditations of Marcus Aurelius. Mary isn’t arguing that the classical world was the only influence on our gendered notion of power, but ‘classical traditions have provided us with a powerful template for thinking about public speech, and for deciding what counts as good oratory and bad, persuasive or not, and whose speech is to be given space to be heard.’


Forced to choose

Within a culture such as ours, where power and authority are ‘coded’ as male, women have a choice: fit into that structure, or change the structure itself. In the classical world, the only viable option was the former. Publicly outspoken women had to cloak their femininity somehow. (The goddess of war, Athena, dressed in a soldier’s uniform and remained a virgin.) Or they restricted their public speech to ‘women’s issues’: the home, children, their husbands or the interests of women.

Mary notes that Western women claiming a public voice have kept making the former choice right up to the present day. In 1588, in her Speech to the Troops at Tilbury, Elizabeth I of England told her soldiers:

I know I have the body of a weak, feeble woman; but I have the heart and stomach of a king, and of a king of England too…

Margaret Thatcher famously took lessons to lower the pitch of her voice. Angela Merkel and Hillary Clinton wear pantsuits—probably out of choice, convenience and practicality, but also to fit our expectations of what power looks like. Mary also suggests that:

It was the disconnect in our heads between ‘women’ and ‘power’ that made Melissa McCarthy’s parodies of the one-time White House press secretary Sean Spicer on Saturday Night Live so effective. It was said that these annoyed President Trump more than most satires of his regime because, according to sources close to him, “he doesn’t like his people to appear weak.” Decode that, and what it actually means is that he doesn’t like his men to be parodied by/as women.

(If power is gendered as male in our world, then for Trump weakness is also gendered—as female.)


Gender-inclusive power?

But perhaps now is the moment in our culture when we ‘change the structure itself.’ This, Mary argues, is the prime map-making opportunity of our time: to become critically self-aware of what we expect power to look and sound like, and to redraw those expectations. To re-code power to be gender-inclusive. ‘It is happily the case that there are now more women in what we would all probably agree are “powerful” positions than there were ten, let alone fifty years ago…But my basic premise is that our mental, cultural template for a “powerful person” remains resolutely male.’

A gender-inclusive map of power would, Mary thinks, distinguish ‘power’ from ‘public prestige’ or ‘celebrity.’ It would diminish the notion of power as a noun and think of it more as a verb: less a thing that can be possessed (which implies that others do not possess it) and more an action that might come from anywhere. It would be both individual and collective—recognizing the power of followers alongside the power of leaders.

Or we could go the other way, and, as in the European Renaissance, take this opportunity to reinvigorate classical ideas about gendered power and speech (like this Republican candidate for the U.S. Senate in Missouri, who wants his daughters to be homemakers, not ‘career obsessed banshees’).

Personally, I find Mary’s project more interesting. She opens a fresh dimension—power—to our awakening conversation about gender. We know power when we see it. Let’s get curious about that map. How do we know power when we see it? What signifies power to us? And how might we scramble those signals?


More from Mary Beard

‘Women in Power’ (YouTube, 2017) — Mary’s full public lecture, upon which her book Women & Power is based (73 minutes).

‘The Millennia of #MeToo’ (The New Yorker, 2017) — A review of Mary’s book that then evolves into a discussion of what the electoral contest between Hillary Clinton and Donald Trump revealed about gendered power in the U.S.

‘The Poison of Patriarchy’ (The Guardian, 2017) — All the main ideas from Mary’s book, compressed into a 5-minute read.

Map #19: Trump Fiddles. We Watch. Who’s To Blame If Rome Burns?


Donald Trump’s presidency performs a great service for the world. It is to lay bare our vulnerabilities. He reveals the fragility of institutions once thought to be rock-solid (the Republican Party, the free press, the FBI, NATO, NAFTA…). And he highlights new threats whose urgency many of us hadn’t yet appreciated—like foreign cyber influence in democratic elections, or algorithms that subdivide ‘public discourse’ into a collection of tribal rallies.

The great harm that Donald Trump’s presidency performs is to steal our attention away from so many other threats.

For me, the starkest example of the latter came this past week, when the US Centers for Disease Control announced plans to scale back its Global Health Security initiative, put in place in the aftermath of West Africa’s 2014 Ebola epidemic. Few people noticed that headline, coming as it did the same day as news that the Trump-appointed head of the CDC, Brenda Fitzgerald, was resigning in scandal over a conflict of interest: after her appointment, she had invested heavily in Big Tobacco.

The logic for setting up and funding the Global Health Security Agenda (GHSA) was as pure and simple as policy-making can get:

  • The next pandemic is coming. We know this because, as we all witness every year, Nature never gives up trying to make a deadlier flu.
  • It is far cheaper to prevent a pandemic than to fight one after it breaks out.
  • The hot spots where the next pandemic is most likely to emerge are also the countries least able to prevent, detect and respond to outbreaks.

The 2014 Ebola epidemic that hit Guinea, Sierra Leone and Liberia was supposed to be the case that spurred developed-world governments to act upon this obvious logic. Ebola causes massive bleeding throughout the body and kills 50% to 90% of the healthy people it infects. In research labs, it is stored with the same precautions as anthrax and smallpox, and for a brief time in 2014 it roamed free in West African cities linked to the world by air- and seaports. By 2015, over 11,000 people had died in the region. It nearly became a global catastrophe. It could have been isolated to a single village—with proper preventive measures.

To its credit, the US Congress did act in 2014—with a five-year, $600 million funding boost to the CDC to bolster global health security. The new money has seen the CDC train disease detectives and strengthen emergency response in countries where disease risks are greatest. The goal is to stop future outbreaks at their source. (Last year, CDC-trained responders quickly contained just such an outbreak of Ebola in Congo.)

Relative to the risks, $120 million per year is a cheap insurance policy. (The 2014 Ebola outbreak, which never took hold on US shores, still cost US taxpayers $5.4 billion in federal emergency funding, and tens of millions more in overseas military deployment, municipal prevention efforts and global business disruptions.)

In 2019, that insurance policy runs out. Every indication is that the Trump Administration will not renew it: his budget for 2018 calls for an 18% cut to Health & Human Services, which includes the CDC. It also calls for the CDC’s sister program at USAID—another $72 million budget for global health security—to be eliminated entirely.

With no new money in its future, the CDC has already begun scaling back its preventive programs—in the very same hot spots where, over the past four years, it has prevented outbreaks.

So what?

What big questions might this story make us think about? For me, there are three.

First, what time horizon does Donald Trump consider when he decides what to do with his presidency, and what does that mean for the rest of us? A story in The New Yorker about Trump’s speech at Davos two weeks ago said this:

Trump spoke to the only reality he is ordinarily capable of perceiving: right here, right now. As is normal for Trump, his rhetoric implicitly denied the very possibility of a tomorrow in which his Administration’s policies may have consequences beyond an immediate market boost.

When it comes to ‘Making America Great Again’, Donald Trump’s metrics of greatness are so short-term, they are practically ephemeral: the price of the US dollar, the level of the Dow or the NASDAQ, last month’s unemployment figures.

Yes, the short-term matters. Sometimes, it’s all that matters. But often, the long-term matters more. I’d encourage my American cousins, especially in American media, to force Trump to draw the line he sees between his present actions and long-term prosperity. How will his proposed cuts to Health & Human Services help to secure Americans against the next pandemic? How will his healthcare reforms reverse the fall in US life expectancy? How will his tax plan help reinvent US education, industrial relations and social security for a more automated, AI-assisted, gig economy?

Second, what time horizon do we consider? I, personally, spend a lot more time each week laughing with Stephen Colbert about unflattering images of Donald Trump playing tennis than I do encouraging my elected representative to invest more of my tax dollars into pandemic preparedness. The former comes straight to my lap each morning while I munch my Cheerios; the latter seems far, far out of my way. And yet, if and when the next global health crisis does come, it will flip from ‘nowhere on anyone’s to-do list’ to ‘the only thing on everyone’s list.’

Third, how on earth might we change our time horizon—or even believe that it’s possible to choose it? Our technology is luring us into a shorter and shorter awareness of ‘now.’ Our electoral and market cycles lure us into short-term decision-making. And in a world full of ‘disruption’, ‘the long term’ is so unknowable that it seems foolish to plan too much around it. Yet many of the biggest threats to our future can only be met with a long-term view.

I don’t know how we solve that paradox. But the first step, I think, is simply to recognize that the flip-side of attention is inattention. Those of us (like me) who have become so interested in the daily ‘fire and fury’ of Trumplandia must now be less interested in a bunch of other things. That’s the honest price we pay for our fixations. Here in the UK, where Brexit is…well, no one really knows what’s happening right now with Brexit, and that’s one of the big fears growing among long-term civil servants. Whichever way Brexit goes, whatever happens, whole years of government time, attention and salaries will be spent doing something that might have tremendous symbolic importance, but which probably won’t leave the public any better off, in tangible terms. It might, however, make us worse off if, in the future, it turns out we neglected something we shouldn’t have.

Our fixations are costly. We might one day arrive face-to-face with a fresh crisis and wish we had spent more attention elsewhere…

Map #18: To stave off revolution, Davos must do something radical. Here it is.


The Davos theme this year is ‘Creating a shared future in a fractured world’. The World Economic Forum is putting on a show of taking seriously the political and social stresses of this moment. The six co-chairs for this year’s conference are all women, led by Christine Lagarde, head of the IMF. And, while the crowd is still dominated by chief executives, political leaders and journalists, the forum is formally a multi-stakeholder get-together and boasts over 500 delegates from civil society, religious organizations and even a handful of unions.

This year’s Davos is a paradox. The global elite are getting together in a Swiss alpine resort to berate themselves—through the voices of Indian Prime Minister Narendra Modi, Canadian Prime Minister Justin Trudeau and, loudest of all, US President Donald Trump—for being out of touch with society.

These self-inflicted scoldings are calibrated to soothe elite anxiety. If successful, they will only harm the interests of those in the room.

Elites aren’t nearly anxious enough about how ‘fractured’ the world is, and the precariousness of their own wealth. This moment is a second Renaissance. And a Renaissance is a time of revolution, not reform. Davos attendees have warmed up to Donald Trump over his first year in office: by not starting a trade war with China (yet) and by offering a generous tax cut to business, he has proven himself to be friendly to their interests. But the same disgust with elite aloofness that elected Trump could have put Bernie Sanders into office just as easily. The global elite got lucky. That’s all.

The excluded in society are restless, and empowered. To stay relevant, to stay safe in their Swiss chalets, chief executives at the Davos gathering should react by doing something revolutionary: commit, collectively, only to contract labour from the gig economy if it is unionized.

It’s not a socialist manifesto; it’s a conservative one. Let labor scramble to self-organize and meet this new demand. Unburden business from trying to meet social objectives that confuse investment decisions.

Workplace unions are obsolete—organizing for better factory conditions is irrelevant once we’ve gotten rid of the factory. Organizing for better social conditions is urgent—but isn’t a priority for Davos Man. That’s why the revolution is coming. Unless business brings it first.

Liquid Modernity


I’ve been following a lot of sociologists lately. ‘How can we best make sense of the moment we’re in?’ is the question I’m asking right now, and the whole field of sociology is always trying to answer it. One of the most thought-provoking books in the field that I’ve found so far is Liquid Modernity, by Zygmunt Bauman.

Zygmunt Bauman (1925-2017) was a Polish-born sociologist and one of the world’s most eminent social theorists. He escaped to the Soviet Union when the Nazis invaded Poland, then returned after WWII as a committed Communist and a lecturer at the University of Warsaw. In 1968, he was kicked out of Poland for being too critical of the country’s Communist regime and moved to the UK. He spent the rest of his career—and life—in Leeds. He died just a year ago. (If he were still living, I’d be knocking on his door right now.) His big ideas—which focus on questions of modernity, consumerism and globalization—reflect decades lived on both sides of the 20th century’s ideological divide.

As a sociologist, Zygmunt passionately believed that by asking questions about our own society, we become more free. ‘An autonomous society, a truly democratic society, is a society which questions everything that is pre-given and by the same token liberates the creation of new meanings. In such a society, all individuals are free to create for their lives the meanings they will (and can).’

On the flipside: ‘Society is ill if it stops questioning itself.’ We become enslaved to the narratives being manufactured all around us, and we lose touch with our own subjective experiences.

Self-questioning our own society is hard work: ‘We need to pierce the walls of the obvious and self-evident, of the prevailing ideas of the day whose commonality is mistaken for proof that they make sense.’

And yet, we must try, because: ‘Whatever safety democracy and individuality may muster depends not on fighting the uncertainty of the human condition, but on recognizing it and facing its consequences point-blank.’

The prevailing ideas of our day box us in. They wall in our awareness of what’s happening in our own society. They limit our sight to the surfaces that have been painted for us. But if we can see the box itself, then maybe we can cut ourselves a window—or even a door…

I hope you’ll forgive me for going a bit long this week. I don’t agree with everything Zygmunt writes, but whether we agree with him is irrelevant. By asking tough questions, he helps us to ‘create new meanings’. And I wanted to put that possibility in your inbox.


Trapped inside liquid modernity

Zygmunt labels the box we’re now trapped inside ‘liquid modernity’. He contrasts it with the very different box of ideas we used to be trapped in, which all had to do with ‘solidity’.

What’s happening to us today—why everything feels so strange—is that we are struggling to shift our thinking, values and identity, from a solid to a liquid state.

‘Flexibility has replaced solidity as the ideal condition to be pursued of things and affairs.’ The very same day I read that sentence, I received this email from the McKinsey Quarterly:

And I saw, not the article, but the box Zygmunt is trying to make me see.

Liquid individuals (or, Making sense of our own selves)

In our personal lives, we now live this shift from solid to liquid daily. In solid modernity, the world of Henry Ford factories and automotive unions, ‘the task confronting free individuals was to use their freedom to find the appropriate niche and to settle there through conformity.’ (If you think about it, our systems of compulsory education were designed to help us achieve that goal, that life.)

But today, ‘such patterns, codes and rules to which one could conform…are in increasingly short supply.’ Where once workers unionized and rallied together to humanize labour against dehumanizing conformity, now we struggle with the absence of stable employment structures. These days, ‘patterns to which we could conform are no longer “given”, let alone “self-evident”; there are just too many of them, clashing with one another and contradicting one another.’

Today, the burden of pattern-weaving (and the responsibility for getting the pattern wrong) falls primarily on each individual’s shoulders. ‘Under the new circumstances, the odds are that most of human life—and most of human lives—will be spent agonizing about the choice of goals, rather than finding the means to the ends which do not call for reflection.’

‘What should I do?’ has come to dominate our actions. There are painfully more possibilities than any individual life, however long, adventurous or industrious, can attempt to explore. The most haunting, insomnia-causing question has become, ‘Have I used my means to the best advantage?’

One of the consequences of this haunting uncertainty is that ‘shopping’ has extended beyond buying stuff to become the very activity of life itself. ‘Shopping is no longer just about food, shoes, cars or furniture. The avid, never-ending search for new and improved examples and recipes for life is also a variety of shopping. We shop for the skills needed to earn our living, and for the ways to learn them best; for ways of making the new friends we want; for ways of drawing attention and ways to hide from scrutiny; for the means to squeeze the most satisfaction out of love and for the best ways to make money…The competence most needed in a world of infinite ends is that of skilful and indefatigable shopper.’

Liquid capitalism (Making sense of Davos)

In his critiques of capitalism, Zygmunt’s bias, built up over decades as a committed communist, reads plainly. But it doesn’t mean his analysis is wrong. And given that this week is the annual World Economic Forum in Davos, Switzerland, I think now is a good moment for all of us to ask some tough questions of our economic modernity.

‘In the fluid stage of modernity,’ Zygmunt wrote, ‘the settled majority is ruled by the nomadic and extraterritorial elite.’ (Apt, eh?)

His reasoning is this: In a solid world, the power of capital over labor was demonstrated by the ability to fix in place, to control. In the solid factories of Henry Ford, power was wielded by bolting human labor to machines on an assembly line.

But that power came with some responsibility, too. In the world of factories, human labor came with a human body. ‘One could employ human labor only together with the rest of the laborers’ bodies…That requirement brought capital and labor face-to-face in the factory and kept them, for better or worse, in each other’s company.’ Factory owners had to supply some light, some food, some safety at least.

That’s no longer the case. In our liquid, digital economy, labor no longer ties down capital. While labor still depends on capital to supply the tools to be productive, capital itself is now weightless, free of spatial confinement. Now, the power of capital is to escape, to avoid and evade, to reject territorial confinement, to reject the inconvenience and responsibility of building and maintaining a labor force. ‘Brief contracts replace lasting engagements. One does not plant a citrus-tree grove to squeeze a lemon.’

In liquid modernity, capital travels hopefully (with carry-on luggage only), counting on brief profitable adventures and confident that there will be no shortage of them. Labor itself is now dividing into those who can do the same, and those who cannot:

‘This has become the principal factor of present-day inequality…The game of domination in the era of liquid modernity is not played between the bigger and the smaller, but between the quicker and the slower…People who move and act faster are now the people who rule…It is the people who cannot move as quickly, and especially, those who cannot leave their place at all, who are ruled…Some of the world’s residents are on the move; for the rest it is the world itself that refuses to stand still.’

Where once we valued durability, now we value flexibility. Transience. Because that which cannot easily bend will instead snap.

Liquid society (Making sense of our Trump obsession)

Remember George Orwell’s Nineteen Eighty-Four? In solid modernity, we feared the monolithic Big Brother. We feared the totalitarian state that would lock all of our private freedoms into the iron grip of public routines. The private sphere would be devoured by the public. Now, we fear the reverse: that the unfettered freedom of our private action is eroding, devouring, the once solid-seeming institutions of the public sphere.

The task now is to defend the vanishing public realm.

In the era of ‘solid modernity’, the metaphor for society was that of ‘citizens in a shared household’. The household had norms, habits and rules. And politics was about building awareness of, and tweaking, those features of household life.

But now, it’s like we’re all ‘individuals in a caravan park’. We come and go, according to our own itinerary and time schedule. We all bring to the park our own homes, equipped with all the stuff we need for our stay—which we intend to be short. There’s a site manager, from whom what we want most is to be left alone and not interfered with. We all pay our rental fee, and since we pay, we also demand. We want our promised services—electric sockets and water taps, and not to be disturbed by the other campers—and otherwise want to be free to do our own thing. On occasion, we clamor for better service from the manager. Sometimes we get it. But it doesn’t occur to us to challenge the managerial philosophy of the site, much less to take over the responsibility for running the place. We may, at the utmost, make a mental note never to use the site again and not to recommend it to our friends. But when we leave, the site remains much the same as it was before our arrival.

This shift, from ‘shared household’ to ‘caravan park’, makes for a profoundly different public discourse. Rather than a space to debate our collective problem—how to build the good or just society—the public sphere has become dominated by the private problems of public figures. To fear Big Brother was to fear the few watching the many. ‘But now the tables have been reversed. It is now the many who watch the few.’ (Or the one…Donald Trump)

As the public realm dwindles down to public commentary on private virtues and vices, the collective questions fade from public discourse, until we reach the point we are at today, where ‘politicians offer us their sentiments, rather than their acts, for our consumption’ and we, as spectators, do not expect much more from our politicians than a good spectacle.

Liquid identity (or, Making sense of populism)

Immigration is a good thing. ‘A mixing of cultural inspirations is a source of enrichment and an engine of creativity.’ At the same time, ‘only a thin line separates enrichment from a loss of cultural identity.’

Faced with the fluidity of this modern moment, it’s not surprising that we respond to the ‘other’, the strange, the foreign by pushing it away. Separation and escape from difference is so much easier, so much more natural, for us now than engagement and mutual commitment.

‘Don’t talk to strangers’, parents used to tell their children. Today that advice is redundant. Who does that anymore? ‘Civil spaces’—spaces where we met strangers and did some mutual thing together—are shrinking.

Public spaces—movie theatres, shopping streets, restaurants, airports—are proliferating. But such spaces ‘encourage action, not inter-action.’ In public spaces, genuine encounters with strangers are an annoyance; they keep us away from the actions in which we are individually engaged. However crowded these spaces may be, there is nothing ‘collective’ going on among the crowd. These crowds are accurately called gatherings, but not congregations; clusters, not squads; aggregates, not wholes.

Because civil spaces are shrinking, ‘the occasions to learn the art of civility are ever fewer and further between.’ And civility—the ability to live with differences, let alone to enjoy such living and to benefit from it—is an art. ‘It does not come easily. Like all arts, it requires study and exercise.’

If we lack the art of civility, ‘seeking security in a common identity rather than in an agreement on shared interests emerges as the most sensible way to proceed, because no one knows how to talk to anyone else.’

Patriotism and nationalism are the easiest ways to construct a shared sense of safety. But given the messy, tangled reality of humanity today, they’re also the least stable. ‘In a stark opposition to either the patriotic or the nationalistic faith, the most promising kind of unity is one which is achieved, and achieved daily anew, by confrontation, debate, negotiation and compromise between values, preferences and chosen ways of life and self-identifications of many and different people. This is a unity that is an outcome of, not a prior condition to, shared life.

‘This, I wish to propose, is the only formula of togetherness which our liquid modernity renders plausible…And so the choice stares us in the face: to learn the difficult art of living with difference.’

This line of thinking led Zygmunt to conclude (in 2012, four years before Brexit and Trump): ‘The big question, likely to determine the future of civilization, is which of these two contending “facts of the matter” will come out on top: the life-saving role played by immigrants in slow-growing, fast-ageing countries, or the rise in xenophobic sentiments, which populists will eagerly recycle into electoral power?’


Connecting the dots

All of the above is just one person’s way of making sense of the changes we’re all going through. But it’s remarkable how similar his sense-making is to others’ attempts. In language that reminds me strongly of Marshall McLuhan, who described living in ‘a state of terror’, Zygmunt writes: ‘Living under liquid modern conditions can be compared to walking in a minefield: everyone knows an explosion might happen at any moment and in any place, but no one knows when the moment will come and where the place will be.’

Under conditions of ‘liquidity’, everything can happen—yet nothing can be done with confidence and certainty. That’s because ‘we presently find ourselves in a time of “interregnum”—when the old ways of doing things no longer work, the old learned or inherited modes of life are no longer suitable for the current human condition, but when the new ways of tackling the challenges and new modes of life better suited to the new conditions have not as yet been invented.’

But we’re working on it.


More from Zygmunt Bauman

Two of Zygmunt’s obituaries (January 2017), in The Guardian and Al Jazeera. The former is more informative. The latter is more personal.

‘Passion and Pessimism’ (2003) — a long essay-interview in The Guardian, in which Zygmunt confronts the accusation of being too pessimistic about the present and describes the ‘restless moral energy’ that made him an intellectual maverick his whole life.

‘Liquid Fear’ (2016) — one of Zygmunt’s last video-interviews, given just a few months before Trump’s 2016 election victory. He talks (in a thick accent!) about ‘how we live today in a state of constant anxiety about the dangers that could strike unannounced at any moment’ and how to cope as passengers in an airplane with no pilot.

‘Social Media are a Trap’ (2016) — an interview Zygmunt gave to the Spanish newspaper El País. Regarding social networks, he points out: ‘The difference between a community and a network is that you belong to a community, but a network belongs to you. You feel in control. You can add friends if you wish, you can delete them if you wish. You are in control of the important people to whom you relate. People feel a little better as a result, because loneliness is the great fear in our individualist age. But it’s so easy to add or remove friends on the network that people fail to learn the real social skills that you need when you go to the street, when you go to your workplace, where you find lots of people with whom you need to enter into sensible interaction.’

Marshall McLuhan Decodes Our Present

I’ve been reading a lot of Marshall McLuhan lately. McLuhan was the Canadian public intellectual of the 20th century who coined the phrases “global village” and “the medium is the message”. His 1962 book The Gutenberg Galaxy was one of the first to draw connections between the advent of print in Europe in the 1450s and the advent of computers. But whereas many thinkers considered the scientific, economic and political parallels, McLuhan focussed on the social and cultural parallels. I’m finding that a lot of his ideas, written at the birth of computing, help make sense of our digital transformation now.

I shared some McLuhan-inspired thoughts this past week on CBC Radio’s The Current with Anna Maria Tremonti. I thought I’d pull out a few brief excerpts to spark some reflection and conversation.

With appreciation,

Chris

 


1. The revolution has already happened.

ANNA MARIA: Since Donald Trump became president a year ago, we’ve heard a lot about fake news and foreign interference in elections, not solely in the U.S. What do you make of the way the world has reacted to those two threats and controversies?

CHRIS: 2016 was the year of shock—of Brexit and Trump’s election. 2017 was our year to gawk at those events and their aftermath. I’m hoping that 2018 is the year when we finally block out the noise and say, “All right. Let’s take a step back and decide how we’re going to navigate through all of this.”

I think a good way for all of us to start off 2018 would be to dust off Marshall McLuhan. Back in the ‘60s, he already foresaw a lot about how today’s social media would transform society. And one of the things that he warned us was, “When we get to this hand-wringing stage, the revolution has already happened.”

ANNA MARIA: So in fact the revolution has happened. And we now need to adapt.

CHRIS: That’s right. Take foreign interference in democratic discourse, for example. We need to be sober about asking ourselves, “Can we really censor out foreign voices from our domestic discourse?” We live in a “free society”, and traditionally we mean by that “open” and “unrestrained”. But in 2018 it also means “vulnerable”. Social media helps make this vulnerability visible, because it makes public discourse explicit and traceable. But many other dimensions of foreign interference in our public discourse are not so easy to see. That’s the broader reality that our social media experience helps us to recognize.


2. We are the oxygen feeding the bonfires we can’t look away from.

ANNA MARIA: If there is an historical precedent for us—for regular folk in 2018—to let social media run amok, then doesn’t that same historical precedent suggest that those in control don’t want it to?

CHRIS: There is a widespread fear that social media is going to upturn the existing order. But actually, I don’t think that we’ve begun to realize the power that social media has given to everyone.

I only have to look at President Donald Trump, at the stranglehold that he has on our attention, to see that we don’t yet understand our full power. Again, this is something that Marshall McLuhan made clear 50 years ago. He said that when we enter into the digital age, we’re going to regress to an oral culture. We’re going to regress to a society in which who says a thing and how big an audience hears the saying of it determines what’s true for us.

That’s exactly what’s happening. Our problem is that our habits of consuming information are still stuck in the old print culture. In print culture, we got into the habit of believing what we read and what we heard. But in an oral culture, we need to recover the capacity to ask ourselves “Who do I want to listen to?” and “Who do I want to ignore?”

And, again, it’s clear from the size of Trump’s global audience that we haven’t yet rediscovered how to do that. We’re stuck in this paradox of thinking—and this is a very print-oriented way of looking at the world—that because Donald Trump is powerful, we need to listen to him. But we’ve got cause-and-effect backwards. In today’s oral culture, the reality is that because we need to listen to him, he became powerful.

We still have a long way to go toward recognizing our power to ignore—and, on the flipside, how we are the oxygen feeding the bonfires we can’t look away from.


3. In an oral culture, terror is the natural state.

ANNA MARIA: How does the shift from print culture to a digital culture change the way we assess the value of new ideas?

CHRIS: You’ve put your finger on one of the most stressful aspects of living at the birth of this digital age, which is: “How do we select which are the good ideas?” In a print world, publishers had a certain authority to select and curate what was—to borrow from the masthead of the New York Times—“fit for print”. And it wasn’t just print. With the advent of radio, of television, there has always been a curator of content. Suddenly we live in a digital world where everyone has the power to reach the many. I’m going to borrow from Marshall McLuhan one last time because…

ANNA MARIA: I was just going to ask you to go back to him. So go ahead.

CHRIS: (Laughs) He was so prescient about this, because he said “the natural state of people in an oral culture is terror”. Which is how a lot of us feel today. Why? Because it seems like anything can affect anything at any time. And it’s hard to ever feel secure in that situation.

ANNA MARIA: Well, let me pick up on that, then, because you talk about how in a digital culture where everyone has authority, no one has authority—and everyone has responsibility. So part of our adaptation has to be developing new ways of critical thinking.

CHRIS: Right. But it’s not just us on our own.

ANNA MARIA: Collectively on our own (Laughs).

CHRIS: (Laughs) Sometimes it feels that way. But other actors in society have a role to play, too. When I talk about “letting social media run amok”, I don’t necessarily mean that we need to let these companies—the Facebooks and Twitters of the world—run amok.

The big debate right now is: do we leave them laissez-faire or do we regulate them as publishers? I think that’s probably the wrong debate to be having. I don’t think that these entities belong in the same category of regulation as publishers. They’re more like utilities—a form of national infrastructure akin to an energy utility or a telephone utility. And what’s the public responsibility of utility companies? It’s not to police their content. It’s to report usage and usage patterns, so that government, academics, police and society at large can better research, understand and regulate the systemic risks that are inherent to any important infrastructure.


More Marshall McLuhan

CBC Life & Times – Marshall McLuhan (YouTube, 1999) — Best video documentary I found on the web. Part I is just 10:00 long.

The Man Who Predicted The Internet Had A Stark Warning For How It Might Be Used (Independent.co.uk, July 21 2017) — Best of many short news articles published on the anniversary of his birth last year.

Marshall McLuhan Speaks (website) — The best digital archive of audio and video from three decades of McLuhan’s public lectures. If you’ve got one minute to burn, watch the first minute of this Introduction by Tom Wolfe (author of The Bonfire of the Vanities).

The radical remaking of economics

“The real voyage of discovery consists, not in seeking new landscapes, but in having new eyes.” (Marcel Proust)

I’ve been reflecting on these words, which Marcel Proust wrote in his 1923 work, Remembrance of Things Past. They summarize, better than I could, the journey of adaptation I think we all face today.

As 2017 draws to a close, I am deliberately shifting gears, from writing to researching. To borrow Proust’s language, my question for 2018 is simply this: What are the new eyes we need? How does our thinking need to change to keep pace with a changing world? I’m pulling together the best answers I can find, from the bravest thinkers I can reach—and I’ll share what I think are the best insights as I find them. My hope is that, by the end of 2018, we’ll have travelled the length and breadth of the New World together through these letters.

The above quote by Proust reminds me of another, by Tom Stoppard: “It’s the best possible time to be alive, when almost everything you thought you knew is wrong.”

With appreciation,

Chris


Rethinking Economics

I’ve just finished re-reading The Origin of Wealth, by Eric Beinhocker (Eric is a colleague of mine from Oxford). His book’s subtitle—“The Radical Remaking of Economics”—gives you a strong hint as to why I’m re-reading it now. Whether or not we’ve ever studied economics, mainstream economic theory has installed into our brains some of our most basic, unconscious biases about “how the world works” and “what the future will look like”.

For example, one of the unconscious biases we’ve somehow picked up is that things will “return to normal” in the future. If the present is a period of disruption—of “disequilibrium”—then our expectation of the future is a return to equilibrium. Many of us can feel that bias operating when we think (dream?) about a post-Trump America (surely it’ll reclaim its moral high ground) or when we read stories about the soaring price of Bitcoin (surely it’ll crash back down soon).

Eric has traced the origin of these biases back to mainstream economics, and he’s come to the conclusion that they mislead us more than they guide our understanding of how society really works.

The root problem, Eric finds, is that mainstream economics is built atop the wrong metaphor. That mistaken metaphor is physics. The story goes that in the 17th, 18th and 19th centuries, now-famous scientists—Leibniz, Euler, Lagrange, Hamilton and others—developed a new mathematical language (calculus) that proved able to explain and predict a staggering range of natural phenomena in precise equations. Age-old problems, from planetary motion to the vibration of violin strings, were suddenly mastered. Those successes gave scientists a boundless optimism: perhaps anything in nature could be explained with equations.

The father of modern economics is perhaps Adam Smith (1723-1790), but Smith was a social philosopher. He wrote essays, not equations. In the century after Smith, social philosophers began to think that the same mathematical techniques being used so successfully by physicists to model the motion of the natural world could be applied to model the motion of humans in the economy, too. And the field of economics set out to do just that.

The rest, as they say, is history (and if you want to trace that history fully, read Part I of Eric’s book). Today we have deep notions about how the economies we live in will behave—that supply will match demand, that unstable markets will find a stable equilibrium point (i.e., price), that what goes up must come down—because that’s how the physical universe works, and the physical universe was our starting metaphor for how the economy works, too.

Radical rethinking

The fundamental problem, Eric argues, is that economic activity doesn’t resemble the laws of physics; it resembles the laws of biology—and specifically, evolution.

With all the rigor that a 500-page book allows, Eric marshals powerful arguments for why evolution is a much better metaphor to guide our basic understanding of economic life. I’ll jump straight to the “so what?” Assuming Eric is right, how should our way of seeing the world we live in change? Here are the three most relevant shifts I see, for the world of December 2017:

Forget a return to “stable equilibrium”. The only economic constant is uncertainty. In physics, when you disturb a closed physical system, it will reach a certain final equilibrium state. Set a marble spinning around the inside of a bowl, and eventually it will settle on the bottom. But biological populations have far less predictable futures. They can grow; they can collapse. They can experience periods of stable equilibrium and periods of exponential change. And their futures are path-dependent. Every episode—good and bad—leaves genetic and behavioral marks that (a) change what the next steps in evolution can and cannot be and (b) make a return to a preferred, previous state impossible.

Human progress is not guaranteed. One common, unconscious bias in the modern world is that humanity’s material condition will progress inevitably over time. That’s a comforting notion, but we don’t live in a clockwork universe. In evolution, the one constant truth is that no species or group can rest comfortably upon its achievements. Eventually, the environment changes. When it does, the criteria for prosperity also change. The group’s future prosperity then depends entirely upon how well and quickly it adapts—whether by luck, planning or both—to those new criteria.

Cooperation—not competition—is the most reliable path to economic prosperity. In an economy whose metaphor is physics, we each decide how to act in any given moment according to over-arching rational laws like self-interest and private utility-maximization. But in an economy whose metaphor is evolution, we develop behavioral norms and rules-of-thumb from the bottom-up, according to whatever helps the population to prosper.

And the oldest, most helpful norm we’ve evolved is not “survival of the fittest”, but cooperation. A group of Stone Age humans hunting together generated much more food, per calorie spent hunting, than a lone individual could.

Our present-day economic universe is full of such “non-zero-sum games”, where everybody is made individually better off if we can somehow work together. Much of humanity’s social evolution as a species up to now, Eric argues, has been about finding clever tricks that make such cooperative games easier to discover and enter into with people we don’t know—tricks like legal systems (to overcome trust problems), money (to overcome problems of how to measure and share the cooperative surplus) and democracy (to overcome disagreements over rules and leadership).
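
To make the “non-zero-sum” idea concrete, here is a minimal sketch in Python. It is my own illustration with invented payoff numbers, not anything from Eric’s book: a stag-hunt-style choice between hunting together and hunting alone.

# A toy stag-hunt payoff table (invented numbers, for illustration only).
# Each entry maps (my_choice, partner_choice) -> my payoff, in "calories of food".
PAYOFFS = {
    ("cooperate", "cooperate"): 10,  # hunt the stag together: big shared surplus
    ("cooperate", "alone"):      0,  # I wait at the stag trap, my partner wanders off: I get nothing
    ("alone",     "cooperate"):  3,  # I chase rabbits on my own: a small, sure meal
    ("alone",     "alone"):      3,  # we both chase rabbits separately
}

def best_response(partner_choice: str) -> str:
    """Return the choice that maximizes my payoff, given what my partner does."""
    return max(("cooperate", "alone"), key=lambda me: PAYOFFS[(me, partner_choice)])

for partner in ("cooperate", "alone"):
    print(f"If my partner chooses '{partner}', my best response is '{best_response(partner)}'.")

The output makes the point of the paragraph above: when my partner cooperates, cooperating beats going it alone (10 versus 3), but only if I can trust my partner to show up. That trust problem is exactly the gap that tricks like legal systems, money and democracy evolved to close.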

The cleverest trick of all is culture. Cooperation is not blind altruism. Altruistic populations can be killed off if too many free-riders enter the game. Free-riders weaken—and can sometimes switch off—the group’s cooperative behavior. Hence, we humans have made our cooperation conditional. We’ve developed strong cultural norms of “fairness” to punish would-be free-riding behavior heavily. We’ll punish it even when doing so harms our own self-interest, however irrational that seems. (The classic example is this: A stranger offers you and your friend $5,000, with two conditions: your friend gets to choose how to divide the money, and you have to agree to the split; otherwise, neither of you gets anything. Say your friend chooses $4,990 for himself, and offers you only $10. If your gut tells you to say “Yes”, you’re a genuine Homo Economicus, ruled by rational self-interest. If your gut says “No”, that’s evolution at work, shaping the conditions of your cooperative behavior.)
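
For readers who like to see the logic written down, here is a minimal sketch in Python of the two gut responses described above. It is again my own illustration; the 30% “fairness threshold” is an assumption chosen for illustration, not a figure from Eric’s book.

# A toy version of the ultimatum game described above (illustrative only).
TOTAL = 5_000  # the stranger's pot, in dollars

def homo_economicus(offer: int) -> bool:
    """Accept any offer that leaves me better off than nothing."""
    return offer > 0

def evolved_cooperator(offer: int, threshold: float = 0.30) -> bool:
    """Reject offers below some culturally tuned share of the pot,
    even though rejecting costs me money. (The 30% threshold is an
    assumption chosen for illustration.)"""
    return offer >= threshold * TOTAL

offer = 10  # the friend keeps $4,990 and offers me $10
print("Homo Economicus says:", "Yes" if homo_economicus(offer) else "No")
print("Evolved cooperator says:", "Yes" if evolved_cooperator(offer) else "No")

The second rule walks away from free money, which looks irrational in the narrow sense; but across many repeated encounters, it is precisely what keeps would-be free-riders from switching off the group’s cooperative behavior.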

Why does this matter right now?

In a world where “economic disruption” has become de rigueur, we can turn to biology and evolution for some rigorous answers to the question, “How might we adapt?” For me, the key take-aways are:

For us individually: Having an “adaptive mindset” (another fashionable phrase) is not about being a visionary risk-taker. It’s about pragmatically placing many small bets, then betting big on what works (a logic sketched below, after these take-aways). That’s how nature does it.

For us as society: We cannot predict the direction of economic evolution, but we can design our institutions and societies to be better or worse evolvers.
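
Here is a rough sketch in Python of that “many small bets, then bet big on what works” logic. It is my own toy illustration, with invented project names and returns, not anything from Eric’s book.

import random

random.seed(42)  # make the toy example reproducible

# True (but unknown to the bettor) average return per dollar of each experiment.
TRUE_RETURNS = {"project_a": 0.9, "project_b": 1.1, "project_c": 1.4}

def noisy_return(option: str) -> float:
    """One trial of an option: its true return per dollar, plus a lot of noise."""
    return TRUE_RETURNS[option] + random.gauss(0, 0.3)

def place_bets(budget: float = 100.0, small_bet: float = 5.0) -> float:
    # 1. Explore: spend a little on every option and record what comes back.
    results = {name: noisy_return(name) * small_bet for name in TRUE_RETURNS}
    spent = small_bet * len(TRUE_RETURNS)
    # 2. Exploit: commit the rest of the budget to the best performer so far.
    winner = max(results, key=results.get)
    return sum(results.values()) + noisy_return(winner) * (budget - spent)

print(f"Ending wealth from a $100 budget: ${place_bets():.2f}")

Nothing in the sketch predicts which project will win in advance; the strategy simply keeps the cost of being wrong small during exploration, then scales up whatever the environment rewards. That is the same variation-and-selection loop that evolution runs.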

And that, I think, is the most urgent message for right now. Looking at society through the lens of evolutionary science, it’s clear: we need to constantly tend and renew the system conditions that cause cooperative play to flourish. Otherwise, there’s no guarantee of a one-way ticket toward greater social order and wealth. Any society can experience collapse and dead ends. Any society can cross the tipping point where cooperative games stop. And that includes ours.

“Fake News”, “Foreign Interference” and the Future of Democracy

One-Year Anniversary of Trump

After a year of poking and prodding at Donald Trump’s election, we’ve diagnosed “fake news” and “foreign interference” as dual, possibly life-threatening infections to the whole system of liberal democracy—and we’ve seized upon social media as the vector by which these ills are spread. A majority of us are drawn into it for at least an hour every day. A majority of us get our news from it. And so “social media”—a phrase that meant nothing barely a decade ago—today is the phrase without which nothing can be explained.

The unaccountable lawlessness that romps through this new medium threatens to drown out the civil discourse upon which democracy depends. So argue many guardians of our present moral and political order, who look poised to tame the most anarchic content through laws or regulations. Last week, executives from Facebook, Google and Twitter were hauled before the U.S. Congress. The message, to paraphrase Senator Dianne Feinstein, was blunt: ‘The platforms you’ve created are being misused. Do something about it. Or we will.’ The standard Mark Zuckerberg defense—that a social media platform should bear no more responsibility for the diseases that travel across it than, say, the Heathrow airport authority bears when travelers carry this season’s flu into or out of London—now sounds naïve.

From Chaos, A New Age?

The reason is that we now fear a pandemic. Not so long ago, the European Union was indivisible, Trump was unelectable, globalization was irreversible, science was incontrovertible and even the democratization of China was inevitable. In the wake of so many sudden reversals to society’s settled projects and norms, the knee-jerk impulse to legislate and protect against further downsides from social media is predictable—even sensible.

It might also be dead wrong. History suggests the wiser course may be to let social media run amok. This new medium will work giant, unintended consequences upon society—regardless of our feeble attempts to control it. The faster we get to those consequences, the better off we’ll be.

That is the broadest lesson from the social transformations and botched control measures during the Renaissance advent of print—history’s best analogue to the test that social media presents.

Pre-print, the only “one-to-many” communications medium was oral. And so who said a thing, and how big an audience bore witness to the saying of it, mattered most. In this oral culture, thrones and pulpits held sway over public discourse and ideas. Then Gutenberg’s printing press—which emerged in Mainz in the 1450s and by 1500 had spread across the continent—made the pulpit publicly available. Unordained voices were amplified, and what was said began to have a power all its own.

This transition to print culture helped shift Europe out of the medieval and into the modern era—in part because the print medium was messy and subversive. An obscure Polish astronomer, toiling away in Warmia, shifted Europe’s understanding of reality from an earth-centered to a sun-centered universe. Copernicus accomplished that feat not only through the singular eloquence or blinding reason of his 1543 book, but also because decades’ worth of almanacs, bestiaries, travelogues, maps of new worlds and detailed diagrams of human anatomy preceded him. Many were lies and superstition. All undermined the medieval conviction that knowledge about the world lay only in ancient wisdom. Similarly, Martin Luther managed to ignite his Protestant Reformation in 1517, not only because his 95 Theses On Indulgences spread faster than the Catholic Church could tear them up, but also because his theses landed on a print culture that was ready to doubt the Church’s monopoly on truth. (How could the Church claim to be infallible, when every printed Bible since Gutenberg’s had textual differences?)

At its advent, authorities welcomed print as an instrument of humanity’s ascendance. In 1470, the Vatican’s own librarian reflected that ‘One can hardly report inventions of like importance for mankind, whether in ancient or modern times.’ But as the unwanted consequences mounted—the spreading of lies, the fanning of zealotry and bigotry, the deepening of linguistic and national divides—that official welcome was replaced by attempts to halt the corrosion of society’s moral and political order. They all backfired.

In the extreme case, the Islamic Ottoman Empire banned the printing press early and outright, in the 1480s—and missed out on the Scientific Revolution. Within Europe, Catholic countries began to censor Copernicus, Galileo and other scholars whose discoveries undermined Church dogma—and the weight of ‘scientific’ printing shifted toward Protestant countries where publishers faced fewer risks.

And everywhere, the growing insistence of church and state on judging which content was fit for print inspired new philosophies that ultimately reinvented both. By the mid-1600s, resistance to state control by printers and authors had established a new political and artistic principle in society—the “freedom of the press”. The glaring contrast between the absolute truths claimed by kings and popes, and the accumulating evidence for alternatives, created the public space in which philosophers like Thomas Hobbes, John Locke and Immanuel Kant challenged the very nature of monarchy and religion. Those investigations, widely published and debated throughout Europe, established concepts like free markets and constitutional democracy, and shaped new attitudes of skepticism and secularism.

Everything That Breaks…Needed Fixing

The print medium made “one-to-many” communications common. Social media makes “many-to-many” society’s new norm. Given the history of the former, who dares bet against the arrival of unintended, unforeseeable transformations powered by the latter?

These consequences are indeed coming. Let’s get to them as quickly as possible. Rather than repeat the mistake of medieval church and state, by pouring energies into a fraught, futile effort to control this new medium, let’s focus our attention instead upon fixing the flaws in our society as it reveals—or causes—them.

Social media has already surfaced a couple of big ones.

First, it’s revealed that we’re not honest enough about the gap between our public politics and our private beliefs. Did Russian Facebook posts win Trump his presidency? No. As Tuesday’s off-year elections demonstrated, they don’t determine down-ballot races, either. No matter how expertly Russian bots nudged Americans’ voting behavior, their algorithms produced a tiny fraction of the 10 billion political likes, shares and comments that Facebook logged during the 2016 election cycle. But skim through the messages generated by those algorithms: they are a cold, clinical diagnosis of the views Americans harbor in the privacy of the voting booth. We should all invite these tough insights, courtesy of the Russian taxpayers, because democratic discourse is a polite sham unless it begins with that honesty.

Second, social media is triggering a regress to oral culture. In print culture, the relative scarcity of access to a press gave each published title a certain gravitas. And over 500 years, we refined skills of critical inquiry to separate the great from the garbage. Now everyone has access to a press, all words are published words, and the skills we had refined to sort them are obsolete. We find ourselves swayed mainly by who says the words—again. That is why, in the US for example, names like Oprah Winfrey and Tom Hanks are now freely floated as presidential contenders—not because they have demonstrated interest in public service, but because of our demonstrated interest in them.

Social media may offer foreign interference or false idolatry a new means to harm our democracy, but it also offers us the opportunity to see these flaws clearly. To try to censor or shield ourselves from their consequences is dangerous: we risk deluding ourselves into thinking our society is safe again, when instead we’re ducking the real challenges we face. We must depend instead upon our own ability to sort through them.

The upside of letting social media run free

In doing so, we should summon the courage not just to tweak liberal democracy, but to reinvent it.

At the advent of print, a moral and political order that felt finished was, in hindsight, medieval. It gave way to modernity.

It’s a safe bet that 2017 isn’t the endpoint of history either, and that society will be ordered quite differently 500 years from now. With our physical technologies, we are already embarking on voyages of discovery to disrupt the present way of things: in artificial intelligence, autonomous robotics, genetic modification, quantum computing and additive manufacturing. But our social technologies—our legal, tax and education systems, asset and labour markets, public values and private ethics—are evolving slowly in response. Too slowly, given that society’s prevailing trend is to grow apart, not together.

If the history of the adoption of print is any guide, our new many-to-many medium will be the catalyst we need to accelerate social evolution. The printing press was a force for secularism and skepticism even as it enabled a century of religious war. Social media is a force for inclusion even as it fosters divisions. To take the most recent example: since mid-October, when allegations of sexual harassment against Hollywood mogul Harvey Weinstein were published, the #metoo hashtag has been posted over 80 million times, victims of harassment have found a supportive social space in which grievances are taken seriously, and workplace norms are suddenly, markedly shifting. In China—a place where social media has zero power to seize the agenda—no national conversation on gender equity or sexual harassment is taking place right now. The Communist Party of China, for all its proud rhetoric of meritocracy, unveiled a new leadership team of precisely seven men on October 25. By contrast, in every advanced democracy, an unscheduled reality check is taking place. That bodes well for our future. In an aging world, prosperity depends upon bringing every marginalized talent to bear.

Let social media run amok. Everything it breaks, needed fixing. Everything else needs a renaissance.

Let Social Media Run Amok

Last week, executives from Facebook, Google and Twitter appeared together before the U.S. Congress. Senators grilled them about Russia’s ongoing use of their social media platforms to influence U.S. public opinion—and, last year, to nudge the outcome of the 2016 presidential election. Prior to the hearing, Facebook admitted that over 140 million Americans were exposed to Russia-sponsored content during the presidential campaign season.

The advent of social media, and the speed with which it has displaced other media as citizens’ go-to source for news and opinion, forces every democratic society around the world to ask itself two tough questions:

  • How do we maintain the free and open discourse upon which democracy depends, without it being drowned in unaccountable lawlessness (“fake news”) or divided into tribes that don’t respect each other?
  • How do we maintain the sovereignty of our own citizens, when our domestic discourse easily admits foreign interference?

Artificial intelligence, whose capabilities are strengthening daily, makes these questions doubly urgent. With growing success, any well-financed interest group can develop algorithms to figure out which citizens to target with which messages in order to influence their political choice-making.

What can, or should, democracies do to address fake news and foreign interference on social media platforms? Should governments try to regulate away these threats to democratic discourse, by treating these platforms as media companies? (Under a new law in Germany, social media companies can be fined up to €50 million if they fail to take down illegal hate speech from their sites within 24 hours of being notified. The U.S. is contemplating new laws that would require big social media platforms to track and disclose political ad-buyers—just as TV and radio must do already.)

Or should we leave the Facebooks and Twitters of the world free to self-regulate—effectively, accepting their argument that they are not media companies, but rather neutral technology platforms with little responsibility for the content they host? Facebook, in an effort to show governments that self-regulation can work, has begun to pay fact-checkers to weed out some of the fake news from among the most popular stories on its platforms. But in a world of self-regulating platforms, the ultimate responsibility for sorting the wheat from the chaff would remain with citizen-users. (This week, Italy began rolling out a “fake news awareness” class across 8,000 high schools to help equip the country’s next generation of adults with the skills to do just that.)

Fake news and foreign influence put democracy itself at risk. With stakes so high, it’s no wonder that governments want to step in to limit the downside risks of social media.

The power of the unintended

But history suggests this thinking may be backwards. The unintended upside of letting social media run wild may far exceed the unintended downside.

That is the lesson from the advent of print—which, as I’ve argued before, is history’s best analogue to the advent of the internet and social media. The former was history’s first true “one-to-many” communications medium. The latter is history’s first true “many-to-many” medium.

The arrival of a “one-to-many” era of communications was dominated by unintended, unforeseeable consequences. Print made it possible for one man, Martin Luther, to ignite a Protestant Reformation that broke the Catholic Church’s monopoly on spiritual truth in 16th-century Europe. The printing press accomplished that feat, not only by spreading Luther’s famous 95 Theses far and wide, but by spreading different versions of the Bible, which eroded public belief in the existence of a single infallible text.

Print made it possible for another man, Nicolaus Copernicus, to shift Europe’s understanding of reality from an earth-centered to a sun-centered universe. Print accomplished that feat not simply by spreading his book, On the Revolutions of the Heavenly Spheres, far and wide. For decades leading up to Copernicus’ publication, print had been changing the cadence of knowledge creation and knowledge sharing in Europe, so that people increasingly trusted the accumulation of new discoveries more than the wisdom handed down from the ancients.

The adoption of this new, print medium was a chaotic experience, full of contradictions. Print did as much to perpetuate blatant errors as it did to spread enlightened truth. It fanned religious extremism and bigotry while fostering a new concern for the poor. It deepened linguistic and national divides, but also created a more cosmopolitan “commonwealth of learning” among artists and scholars. The same technology that helped begin a hundred years of religious war across the continent also helped fuel individuality and usher in the Scientific Revolution.

Illusory control

Many states tried to control print’s impact upon society’s moral and political order. That control came at a price. On the extreme, the Islamic Ottoman Empire banned the printing press outright in 1485, just a few decades after print’s emergence in Europe—and maintained the ban until the 19th century. The Islamic world had been the global seat of scientific progress from 750 to 1100 AD, but lost that seat to Europe’s Scientific Revolution.

Within Europe, censorship regimes tried to suppress “dangerous” aspects of the print medium. Catholic countries heavily censored Copernicus, Galileo and other scientists whose discoveries ran against dogma. In 1543, the Catholic Church decreed that no book could be printed or sold without its permission. In 1563, Charles IX of France decreed that nothing could be printed without the king’s permission, either. Inquisitions, imprisonments and public burnings enforced those decrees through fear.

But ultimately, they were self-defeating. Catholic bans pushed scientific printing toward Protestant countries where the risks of publication were smaller. Resistance to state control, by printers and authors, helped establish the “freedom of the press” as a new political and artistic principle by the mid-1600s. And the contrast between the stubborn efforts of Church and state to monopolize truth and the emerging evidence of alternative truths created a space for philosophers such as Thomas Hobbes, John Locke and Immanuel Kant to question the nature of religion and absolute monarchy. Those investigations, widely published and debated throughout Europe, introduced concepts of free-market capitalism and democracy and shaped new attitudes of skepticism, secularism and individualism. The American and French Revolutions followed not long after.

The history of print offers three principles to inform present debates over the regulation of our new, many-to-many medium. First, attempts to control society’s use of social media may prove self-defeating, by inspiring new philosophies that undermine existing institutions. Second, the unintended consequences of social media will likely dominate any results we may intend to bring about. The greatest gains will go to those societies that explore the upside of the medium, not those that protect against the downside. And finally, the adoption of this new medium is a long-term enterprise. If unintended consequences are going to dominate our future, it would be a good idea to get to those consequences as quickly as possible—and hasten our adaptation to them.