Map #31: A.I. — Humanity’s Lab Rat Or Robot Overlord?


‘I have always believed that any scientific concept can be demonstrated to people with no specialist knowledge or scientific education.’ Richard Feynman, Nobel physicist (1918-1988)

I feel like the whole field of AI research could take a cue from Richard Feynman. Why is something that computer scientists, businesses, academics and politicians all hype as “more important to humanity than the invention of fire” also so poorly explained to everyone? How can broader society shape the implications of this new Prometheus if no one bothers to frame the stakes in non-specialist terms?

This is a project that I’m working on, and I’d love to hear your thoughts on my initial attempts. For starters, in this letter I want to map out, in non-specialist language, where AI research has been and where it is today. 

The Temple of Geekdom

I had coffee recently with the head of AI for Samsung in Canada, Darin Graham. (He’s heading up one of the five new AI hubs that Samsung is opening globally; the others are in Silicon Valley, Moscow, Seoul and London.)

Hanging out with a bona fide guru of computer science clarified a lot of things for me (see this letter). But Darin demanded a steep price for his wisdom: he made me promise to learn a programming language this year, so that the next time we talk I can better fake an understanding of the stuff he’s working on. 

Given my chronic inability to say “No” to people, I now find myself chipping away at a certification in a language called Python (via DataCamp.com, which another guru-friend recommended and which is excellent). Python is very popular right now among data scientists and AI developers. The basics are quick and easy to learn. But what really sold me on it was the fact that its inventor named it after Monty Python’s Flying Circus. (The alternatives—C, Java, Fortran—all sound like they were invented by people who took themselves far too seriously.) 
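To give a flavor of why the basics come quickly, here is a tiny, made-up snippet (not from any course, just an illustration of how readable the language is):

```python
# A first taste of Python: readable syntax, batteries included.
def describe(scores):
    """Return a short summary of a list of numbers."""
    average = sum(scores) / len(scores)
    return f"{len(scores)} scores, average {average:.1f}"

print(describe([88, 92, 79]))  # -> "3 scores, average 86.3"
```

Even someone who has never programmed can more or less read what that does aloud, which is a big part of Python's appeal to newcomers.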

To keep me company on my pilgrimage to the temple of geekdom, I also picked up a couple of the latest-and-greatest textbooks on AI programming: Fundamentals of Deep Learning (2017) and Deep Learning (2018). Again, they clarified a lot of things.

(Reading textbooks, by the way, is one of the big secrets to rapid learning. Popular books are written in order to sell copies. But textbooks are written to teach practitioners. So if your objective is to learn a new topic, always start with a good textbook or two. Reading the introduction gives you a better grounding in the topic than reading a hundred news articles, and the remaining chapters take you on a logical tour through (a) what people who actually work in the field think they are doing and (b) how they do it.) 

Stage One: Telling Computers How (A.I. As A Cookbook)

Traditional computer programming involves telling the computer how to do something. First do this. Now do that. Humans give explicit instructions; the computer executes them. We write the recipe; the computer cooks it. 

Some recipes we give to the computer are conditional: If THIS happens, then do THAT. Back in the 1980s and early 1990s, some sectors of the economy witnessed an AI hype-cycle very similar to the one we’re going through today. Computer scientists suggested that if we added enough if…then decision rules into the recipe, computers would be better than mere cooks; they’d be chefs. Or, in marketing lingo: “expert systems.” After all, how do experts do what they do? The answer (it was thought) was simply: (a) take in information, then (b) apply decision rules in order to (c) reach a decision.
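In code, the “cookbook” approach is literally a cascade of hand-written rules. A minimal sketch (the domain, symptoms and rules here are invented purely for illustration):

```python
# A toy "expert system": explicit if...then decision rules, hand-coded
# as if transcribed from interviews with an (imaginary) expert.
# No learning happens anywhere in this program.
def triage(symptoms):
    if "fever" in symptoms and "cough" in symptoms:
        return "suspect flu"
    if "sneezing" in symptoms and "itchy eyes" in symptoms:
        return "suspect allergy"
    return "no rule matched; refer to a human expert"

print(triage({"fever", "cough"}))  # -> "suspect flu"
```

Every situation the interviews failed to anticipate falls through to that last line, which is exactly the brittleness this section goes on to describe.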

It seemed a good job for computers to take over. Computers can ingest a lot more information, a lot faster, than any human can. If we can tell them all the necessary decision rules (if THIS…, then THAT), they’ll be able to make better decisions, faster, than any human expert. Plus, human expertise is scarce. It takes a long time to reproduce—years, sometimes decades, of formal study and practical experience. But machines can be mass manufactured, and their software (i.e., the cookbooks) can be copied in seconds.

Imagine the possibilities! Military and big business did just that, and they invested heavily into building these expert systems. How? By talking to experts and watching experts in order to codify the if THIS…, then THAT recipes they followed. A new age of abundant expertise lay just around the corner.

Or not. Most attempts at the cookbook approach to computerizing expertise failed to live up to the hype. The most valuable output from the “expert system” craze was a fuller understanding of, and appreciation for, how experts make decisions.

First, it turned out that expert decision rules are very hard to write down. The first half of any decision rule (if THIS…) assumes that we’ve seen THIS same situation before. But experts are rarely lucky enough to see the same situation twice. Similar situations? Often. But ‘the same’? Rarely. The value of their expertise lies in judging whether the differences they perceive are relevant—and if so, how to modify their decision (i.e., …then THAT) to the novelties of the now. 

Second, it turned out that there’s rarely a one-to-one relationship between THIS situation and THAT decision. In most domains of expertise, in most situations, there’s no single “right” answer. There are, instead, many “good” answers. (Give two Michelin-starred chefs the same basket of ingredients to cook a dish, and we’d probably enjoy either one.) We’ll probably never know which is the “best” answer, since “best” depends, not just on past experience, but on future consequences—and future choices—we can’t yet see. (And, of course, on who’s doing the judging.) That’s the human condition. Computers can’t change it for us.

Human expertise proved too rich, and reality proved too complex, to condense into a cookbook. 

But the whole venture wasn’t a complete failure. “Expert systems” were rebranded as “decision support systems”. They couldn’t replace human experts, but they could be valuable sous-chefs: by calling up similar cases at the click of a button; by generating a menu of reasonable options for an expert to choose from; by logging lessons learned for future reference. 

Stage Two: Training Computers What (From Cooks to Animal Trainers)

Many companies and research labs that had sprung up amidst the “expert system” craze went bust. But the strong survived, and continued their research into the 2000s. Meanwhile, three relentless technological trends transformed the environment in which they worked, year by year: computing power got faster and cheaper; digital connections reached into more places and things; and the production and storage of digital data grew exponentially.

This new environmental condition—abundant data, data storage, and processing power—inspired a new approach to AI research. (It wasn’t actually ‘new’; the concept dated back to at least the 1950s. But the computing technology available then—knobs and dials and vacuum tubes—made the approach impractical.) 

What if, instead of telling the computer exactly how to do something, you could simply train it on what to do, and let it figure out the how by itself? 

It’s the animal trainer’s approach to AI. Classic stimulus-response. (1) Supply an input. (2) Reward the outputs you want; punish the outputs you don’t. (3) Repeat. Eventually, through consistent feedback from its handler, the animal makes its own decision rule—one that it applies whenever it’s presented with similar inputs. The method is simple but powerful. I can train a dog to sit when it hears the sound, “Sit!”; or point when it sees a bird in the grass; or bark when it smells narcotics. I could never tell the dog how to smell narcotics, because I can’t smell them myself. But I don’t need to. All I need to do is give the dog clear signals, so that it infers the link between its behaviors and the rewards/punishments it receives. 

This “Machine Learning” approach has now been used to train systems that can perform all kinds of tricks: to classify an incoming email as “spam” or not; to recognize objects in photographs; to pick out those candidates most likely to succeed in Company X from a tall pile of applicants; or (here’s a robot example) to sort garbage into various piles of glass, plastic and metal. The strength of this approach—training, instead of telling—comes from generalization. Once I’ve trained a dog to detect narcotics, using some well-labelled training examples, it can apply that skill to a wide range of new situations in the real world. 
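A pocket-sized version of this “train, don’t tell” idea fits in a few lines. The sketch below is a classic perceptron run on made-up “spam” features; the data, features and numbers are all hypothetical, chosen only to show the reward/punish training loop:

```python
# A pocket-sized "machine trainer": a perceptron nudged toward correct
# answers on labelled examples. When the output is wrong, the trainer
# "punishes" it by adjusting the weights toward the right answer.
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs, label in {-1, +1}."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != label:  # wrong output: nudge weights toward the label
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                b += lr * label
    return w, b

# Toy "spam" features: [number of links, count of ALL-CAPS words]
examples = [([5, 3], 1), ([4, 2], 1), ([0, 0], -1), ([1, 0], -1)]
w, b = train_perceptron(examples)

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

Nobody ever tells the machine what spam looks like; it infers a decision rule from the labelled examples, just as the dog infers the link between behavior and reward.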

One big weakness of this approach—as any animal or machine trainer will tell you—is that training the behavior you want takes a lot of time and effort. Historically, machine trainers have spent months, years, and sometimes decades of their lives manually converting mountains of raw data into millions of clear labels that machines can learn from—labels like “This is a narcotic” and “This is also a narcotic”, or “This is a glass bottle” and “This is a plastic bottle.” 

Computer scientists call this training burden “technical debt.” 

I like the term. It’s intuitive. You’d like to buy that mansion, but even if you had enough money for the down-payment, the service charges on the mortgage would cripple you. Researchers and companies look at many Machine Learning projects in much the same light. Machine Learning models look pretty. They promise a whole new world of automation. But you have to be either rich or stupid to saddle yourself with the burden of building and maintaining one.

Another big weakness of the approach is that, to train the machine (or the animal), you need to know in advance the behavior that you want to reward. You can train a dog to run into the bushes and bring back “a bird.” But how would you train a dog to run into the bushes and bring back “something interesting”?

From Animal Trainers to Maze Architects

In 2006, Geoff Hinton and his AI research team at the University of Toronto published a seminal paper on something they called “Deep Belief Networks”. It helped spark a new subfield of Machine Learning called Deep Learning. 

If Machine Learning is the computer version of training animals, then Deep Learning is the computer version of sending lab rats through a maze. Getting an animal to display a desired behavior in response to a given stimulus is a big job for the trainer. Getting a rat to run a maze is a lot easier. Granted, designing and building the maze takes a lot of upfront effort. But once that’s done, the lab technician can go home. Just put a piece of cheese at one end and the rat at the other, and the rat trains itself, through trial-and-error, to find a way through. 
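The analogy can even be made literal in code. Below is a tiny self-training “rat” (tabular Q-learning, a standard trial-and-error method; the corridor length, learning rates and rewards are all invented for illustration) that learns a one-dimensional maze with the cheese at the far end:

```python
# The maze analogy, literalized: a "lab rat" learns a 1-D corridor by
# trial and error. The technician builds the maze, places the cheese,
# and goes home; the rat teaches itself which way to run.
import random

random.seed(0)
N = 6                                  # corridor cells 0..5; cheese at cell 5
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}  # learned move values

def greedy(s):
    """The rat's current best guess: which way to move from cell s."""
    return max((-1, 1), key=lambda a: Q[(s, a)])

for episode in range(300):
    s = 0
    while s != N - 1:
        # mostly exploit what it knows; sometimes explore at random
        a = random.choice((-1, 1)) if random.random() < 0.2 else greedy(s)
        s2 = min(max(s + a, 0), N - 1)          # walls at both ends
        reward = 1.0 if s2 == N - 1 else 0.0    # cheese!
        # nudge the estimate of "how good is this move from here?"
        Q[(s, a)] += 0.5 * (reward + 0.9 * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

# after training, the greedy policy heads straight for the cheese
policy = [greedy(s) for s in range(N - 1)]
```

No one ever labels a single example or specifies a path; the reward at the end of the maze is the only feedback, and the rat works out the route on its own.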

This “Deep Learning” approach has now been used to produce lab rats (i.e., algorithms) that can run all sorts of mazes. Clever lab technicians built a “maze” out of Van Gogh paintings, and after learning the maze the algorithm could transform any photograph into the style of Van Gogh. A Brooklyn team built a maze out of Shakespeare’s entire catalog of sonnets, and after learning that maze the algorithm could generate personalized poetry in the style of Shakespeare. The deeper the maze, the deeper the relationships that can be mimicked by the rat that runs through it. Google, Apple, Facebook and other tech giants are building very deep mazes out of our image, text, voice and video data. By running through them, the algorithms are learning to mimic the basic contours of human speech, language, vision and reasoning—in more and more cases, well enough that the algorithm can converse, write, see and judge on our behalf. (Did you all see the Google Duplex demo last week?)

There are two immediate advantages to the Deep Learning approach—i.e., to unsupervised, trial-and-error rat running, versus supervised, stimulus-response dog training. The obvious one is that it demands less human supervision. The “technical debt” problem is reduced: instead of spending years manually labelling interesting features in the raw data for the machines to train on, the rat can find many interesting features on its own. 

The second big advantage is that the lab rat can learn to mimic more complicated, more efficient pathways than a dog trainer may even be aware exist. Even if I could, with a mix of rewards and punishments, train a dog to take the path that I see through the maze, is it the best path? Is it the only path? What if I myself cannot see any path through the maze? What if I can navigate the maze, but I can’t explain, even to myself, the path I followed to do it? The maze called “human language” is the biggest example of this. As children, we just “pick up” language by being dropped in the middle of it. 

 

So THAT’S What They’re Doing

No one seems to have offered this “rat in a maze” analogy before. It seems a good one, and an obvious one. (I wonder what my closest AI researcher-friends think of it—Rob, I’m talking to you.) And it helps us to relate intuitively with the central challenge that Deep Learning researchers (i.e., maze architects) grapple with today:

Given a certain kind of data (say, pictures of me and my friends), and given the useful behavior we want the lab rat to mimic (say, classify the pictures according to who’s in them), what kind of maze should we build? 

Some design principles are emerging. If the images are full color, then the maze needs to have at least three levels (Red, Green, Blue), so that the rat learns to navigate color dimensions. But if the images are black-and-white, we can collapse those levels of the maze into one. 

Similarly, if we’re dealing with data that contains pretty straightforward relationships (say, Column A: historical data on people’s smoking habits and Column B: historical data on how old they were when they died), then a simple, flat maze will suffice to train a rat that can find the simple path from A to B. But if we want to explore complex data for complex relationships (say, Column A: all the online behaviors that Facebook has collected on me to-date and Column B: a list of all possible stories that Facebook could display on my Newsfeed today), then only a multi-level maze will yield a rat that can sniff out the stories in Column B that I’d click on. The relationship between A and B is multi-dimensional, so the maze must be multi-dimensional, too. Otherwise, it won’t contain the path. 
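A concrete way to feel the flat-maze-versus-deep-maze point is the classic XOR example. No single-level threshold unit can capture an “exactly one of the two” relationship, but adding one hidden level can. The weights below are hand-set purely for illustration, not learned:

```python
# Why maze depth matters: a single-level ("flat") threshold unit cannot
# represent XOR, but one hidden level can. Hand-set weights, not learned.
def step(x):
    return 1 if x > 0 else 0

def shallow(x1, x2):
    # one level: a weighted sum, then a threshold;
    # no choice of weights makes this compute XOR on all four inputs
    return step(0.6 * x1 + 0.6 * x2 - 0.5)

def deep(x1, x2):
    # two levels: hidden units add the extra "dimension" the maze needs
    h1 = step(x1 + x2 - 0.5)     # "at least one input is on"
    h2 = step(x1 + x2 - 1.5)     # "both inputs are on"
    return step(h1 - h2 - 0.5)   # exactly one is on: XOR

xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

The shallow model can only draw one straight boundary through the data; the hidden level folds the space so the relationship fits, just as extra maze levels give the rat room to find a path that a flat maze simply does not contain.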

We can also relate to other challenges that frustrate today’s maze architects. Sometimes the rat gets stuck in a dead-end. When that happens, we either need to tweak the maze so that it doesn’t get stuck in the first place, or teleport the rat to some random location so that it learns a new part of the maze. Sometimes the rat gets tired and lazy. It finds a small crumb of cheese and happily sits down to eat it, not realizing that the maze contains a giant wheel of cheese—seven levels down. Other times, the rat finds a surprising path through the maze, but it’s not useful to us. For example, this is the rat that’s been trained to correctly identify any photograph taken by me. Remarkable! How on earth can it identify my style…of photography!?! Eventually, we realize that my camera has a distinctive scratch on the lens, which the human eye can’t see but which the rat, running through a pixel-perfect maze, finds every time. 

 

Next Steps

These are the analogies I’m using to think about different strands of “AI” at the moment. When the decision rules (the how) are clear and knowable, we’ve got cooks following recipe books. When the how isn’t clear, but what we want is clear, we’ve got animal trainers training machines to respond correctly to a given input. And when what we want isn’t clear or communicable, we’ve got maze architects who reward lab rats for finding paths to the cheese for us. 

In practice, the big AI developments underway use a blend of all three, at different stages of the problem they’re trying to solve.

The next question, for a future letter, is: Do these intuitive (and imperfect) analogies help us think more clearly about, and get more involved in, the big questions that this technology forces society to confront? 

Our experience with “expert systems” taught us to understand, and appreciate, more fully how human experts make decisions. Will our experience with “artificial intelligence” teach us to understand and appreciate human intelligence more fully?

Even the most hyped-up AI demonstration right now—Google Duplex—is, essentially, a lab rat that’s learned to run a maze (in this case, the maze of verbal language used to schedule appointments with other people). It can find and repeat paths, even very complicated ones, to success. Is that the same as human intelligence? It relies entirely upon past information to predict future success. Is that the same as human intelligence? It learns in a simplified, artificial representation of reality. Is that the same as human intelligence? It demands that any behavior be converted into an optimization problem, to be expressed in numerical values and solved by math equations. Is that the same as human intelligence? 

At least some of the answers to the above must be “No.” Our collective task, as the generations who will integrate these autonomous machines into human society, to do things and make decisions on our behalf, is to discover the significance of these differences. 

And to debate them in terms that invite non-specialists into the conversation.

 

Brave voyages,

Chris

Map #30: Violence, Conformity and a Toronto Van Attack


I was going to write about something completely different this week. Then I saw the news headline that a van had plowed through a mile of sidewalk in Toronto, killing nearly a dozen people and injuring dozens of others. It suddenly felt wrong to continue with the work I had been doing, and it felt right to meditate on the meaning and the causes of what had just happened.

That profound sense of wrongness was itself worth thinking about. Here I am in London, England, staring out my window at another wet spring morning. When, last year, a van plowed into pedestrians on London Bridge, or when a car plowed through pedestrians outside the Houses of Parliament, or when a truck drove through a Berlin Christmas market, or through a crowd of tourists in Nice, I read the headlines, I expressed appropriate sympathy, and then I went on about my business.

In my letter a couple weeks ago, I shared a quote from Jonathan Sacks, who said that, “The universality of moral concern is not something we learn by being universal but by being particular.” It’s when violence shatters the lives of my fellow Canadians that I am deeply touched by the inhumanity of the act. I understand these words better, now.

To reach some deeper insights into what happened in Toronto, and into similar events in the past and yet to come, I picked up The Human Condition (1958), by Hannah Arendt. Hannah Arendt (1906-1975) was one of the biggest political thinkers of the 20th century. From her first major book in the 1950s onward, she tried to make sense of a lot of the same things that we are all trying to wrap our heads around today: the rise of authoritarian regimes, the consequences of automation, the degradation of our politics, and the tensions between what consumer society seems to offer us and what we actually need to be fulfilled by life.

I came away from her book with some helpful insights about three or four big topics in the public conversation right now.

I’ve only got space here to reflect on last week’s van attack. But the biggest takeaway for me from this book was that, whatever horror hits the day’s headlines, if we can, through these events, grow our understanding of the human condition, then we will go to sleep stronger than when we woke up in the morning.

And that, at least, is something positive.

Brave voyages, to us all,

Chris

 

Hannah’s Big Idea in 500 words

To grasp Hannah’s insights into our present-day headlines, we need to give her the courtesy of 500 words to tell us her big idea.

In a nutshell, her big idea is that all human doing falls into one of three buckets: labor, work and action. Now for most of us, these three words kinda mean the same thing; their definitions blur and overlap. But Hannah says, No, no, no, the differences between these words mean everything. The better we grasp their distinctions, the more clearly we will grasp the human condition—and, by extension, why everything is happening.

Work is like craftsmanship or art. We are at work when, like the sculptor, we start with an idea and turn it into reality. Work has a definite beginning and end. When the work is over, we have added something new to the world.

To Hannah’s mind, the Industrial Revolution largely destroyed work, destroyed craftsmanship, so that today it’s really only artists who experience the intrinsic value of having made something.

In the modern world, most of us don’t work anymore. Instead we labor. Labor has neither a beginning nor an end. It is an unending process of producing stuff that is consumed so that more stuff can be produced and consumed. Most laborers work on only a piece of the whole, with only a faint grasp of that whole, and little or no intrinsic satisfaction for having contributed to it. As laborers, we do not make things; we make a living. And as the cliché goes: when we’re old and grey and look back on our lives, we won’t remember the time we spent at the office. Why? Because that time was largely empty of intrinsic value; it was empty of stuff worth remembering. (Hannah could be a bit dark at times.)

Action is, for Hannah, the highest mode of human doing. To act is to take an initiative. To begin. To set something in motion. Unlike work, which is the beginning of something, action is the beginning of someone. It is how we distinguish ourselves and make apparent the plurality of human beings. If our labor reveals what we are (lawyer, banker, programmer, baker), then our actions reveal who we are. Through our words and deeds, we reveal our unique identity and tell our unique story. Action has a beginning, but it has no end—not one that we can see, anyway, because its consequences continue to unfold endlessly. (In her glorification of speech and action, Hannah lets slip her love affair with ancient Greece. More on that later.)

In short, for Hannah the whole human condition can be understood in the distinctions and conflicts that exist between labor, work and action. Through that lens, I think she would say the following about the Toronto van attack.

 

This Toronto man was reaching for immortality

Hannah, a Jew, quit Nazi Germany in 1933. She knew a lot about the horrors of violence, she studied violence, and she strove to understand it.

Hannah, I think, would have zeroed in on the driver’s desire to commit “suicide by cop,” and the consequence of his failure to do so. She wrote:

The essence of who somebody is can come into being only when life departs, leaving behind nothing but a story. Whoever consciously aims at being “essential,” at leaving behind a story and an identity which will win “immortal fame,” must not only risk his life but expressly choose (as, in Greek myth, Achilles did) a short life and premature death. 

Only a man who does not survive his one supreme act remains the indisputable master of his identity and possible greatness, because he withdraws into death from the possible consequences and continuation of what he began. 

But, because of the restraint shown by the arresting officer, the man was denied the privilege of writing his own ending. He remains alive to face the unfolding consequences of “his one supreme act.” Instead of summing up his whole life in that one single deed, his story will continue to unfold, piecemeal. With each fresh, unintended page, he will be less the author, and more the witness, to his own place in history. He sought to win immortal fame. Instead he will live, be locked away, and be forgotten.

Those who feel weak, get violent

What about this “incel” movement—this involuntary celibacy stuff—which seems to have inspired the man’s rampage? Hannah wrote:

The vehement yearning for violence is a natural reaction of those whom society has tried to cheat of their strength.

Hannah thought of strength as something that individuals possess. She distinguished strength from power, which is something that people possess—but only when they act together. In a contest between two individuals, strength (mental or physical) decides the winner. In a contest between two groups, power—not numbers—decides the winner. (That’s why history is full of examples of small but well-organized groups of people ruling over giant empires.)

But in a contest between individual strength and collective power, collective power always wins. We saw this truth in the aftermath of the Toronto attack: the public’s coming together in the wake of the driver’s rampage showed just how helpless he is, whatever weapon he might wield, to change the way things are.

We’ve also seen this truth on a larger scale, Hannah argued, in “passive resistance” movements like that of Mahatma Gandhi.

Popular revolt against strong rulers can generate an almost irresistible power—even if it foregoes the use of violence in the face of vastly superior material forces….

To call this “passive resistance” is certainly ironic; it is one of the most powerful ways of action ever devised. It cannot be countered by fighting—the other side refuses to fight—but only by mass slaughter (i.e., violence). If the strong man chooses the latter, then even in victory he is defeated, cheated of his prize, since nobody can rule over dead men.

Hannah’s point is, individual strength is helpless against collective power. For some groups, that’s been the secret to liberation: liberation from the strongman, the tyrant, the dictator. For some individuals, that’s been the explanation for their imprisonment: imprisonment to social norms, to shifting values, to their own sense of impotence.

How do we build a society of healthy individuals?

“Individual strength is helpless against collective power.” If Hannah was right about that, then the big question we need to ask ourselves is: How do we build a society of healthy individuals—a society that doesn’t suffocate, but instead somehow celebrates, individual strength?

To be sure, “involuntary celibates” who rage against their own frustrations are maladjusted, and they need to bear their own guilt for that. But, Hannah would argue, explosions of violence in our midst also remind us that the power of the group to make us conform is awesome.

So how do we each assert our uniqueness within society? It’s a deadly serious question.

 

The Greek solution

For Hannah, who saw only three basic choices in front of each of us—labor, work or action—the only hope for modern man and woman to assert their individuality lay in the last: action. Labor is the activity that deepens our conformity, until even our basic biological choices of when to sleep and when to eat are set by the rhythm of the economic machine. And the industrial revolution destroyed whatever private satisfactions once existed in the craftsman’s “work”.

So we’re left with action. And, Hannah mused, we’ve got two choices: either we create a public arena for action to be put on display, or people will create it on their own. The Toronto van attacker did the latter.

The ancient Athenians did the former. Their public arena for action was the political arena, the polis. In our modern economic mind, we tend to think of the political arena as an unproductive space where “nothing gets done.” But to the ancient Athenians, it was the primary public space where individual achievements were asserted and remembered.

The polis was supposed to multiply the occasions to win “immortal fame,”—that is, to multiply the chances for everybody to distinguish himself, to show in deed and word who he was in his unique distinctness. 

The polis was a kind of “organized remembrance,” so that “a deed deserving fame would not be forgotten.” Unlike the products of human labor and work, the products of human action and speech are intangible. But through the polis, they became imperishable.

For ancient Athenians, the political arena squared the circle. It transmuted society’s awesome power to force conformity into a shared celebration of individual strength. Or, as the philosopher Democritus put it in the 400s BC:

“As long as the polis is there to inspire citizens to dare the extraordinary, all things are safe; if it perishes, everything is lost.” 

 

How do WE square the circle?

Unfortunately, when it comes to the healthy assertion of individuality today, we modern people have painted ourselves into a corner. And we keep painting it smaller.

Politics isn’t a viable place for us to immortalize deeds anymore, because (in a complete reversal of how the ancient Greeks saw it) politics produces nothing for consumption—and is therefore unproductive.

But our primary productive activity—labor—is, likewise, empty of distinguishing deeds that we might want to immortalize. (Again, that seems to be the truth we all admit on our deathbeds.)

Maybe one day soon, when automation frees us from the need to perform any labor at all, we will then use our abundant time to develop our individual excellences. Hannah had grave doubts about that. More likely, having already formed the habit of filling our spare time with consumption, we will, given more spare time, simply consume more—more food, more entertainment, more fuel for our appetites.

The danger is that such a society, dazzled by its own abundance, will no longer be able to recognize its own emptiness—the emptiness of a life which does not fix or realize anything which endures.

Even our “private lives”—where we should feel no need to assert our individuality against the group at all—are now saturated with the need to perform for others.

The conclusion, which would be really funny if it weren’t so serious, is that for all our runaway individualism, it may be that modern society suffers from a crisis of individuality.

 

Hold onto the dilemma

Hannah didn’t solve these problems for us. (I guess that’s why she called this book The Human Condition.) But she did frame two powerful questions that can help us respond meaningfully to senseless acts of violence:

  1. How can I take part in the power of people acting together?
  2. What is the arena in which I celebrate individual distinctiveness—mine, and others’?

This dilemma isn’t going away. ’Squaring this circle’ between social conformity and individual strength is one of the big, underlying projects of our time.

 

Map #29: False Choices (‘Should Science Or Values Take Priority?’) 


I Hate False Choices

As many of you know, I’m actively exploring routes to re-engage with my Canadian…er…roots. One of those routes is a policy fellowship program. I recently went through the application process, which included responding to the following question:

 In policy making, science and evidence can clash with values and perspectives. What should take precedence and why?

False choices like this one are a special hate of mine. So I thought I’d share my brief rant with you all. (If you share my special hate, or hate my special rant, let’s chat!)

<begin rant>

The premise that “science and evidence” stand in opposition to “values and perspectives” is fatal to the project of liberal democracy.

The ultimate consequence of this premise is precisely the crisis that we now face in our politics today—namely, the emergence of new, competing truth machines to support value-based policy agendas that were consistently denied validity by the truth machine of “science and evidence.”

This error goes all the way back to the Enlightenment, when we separated science from values, elevated science to the status of higher truth and gave science a privileged, value-free position from which to survey the unenlightened.

That act of hubris planted the seed for the present-day rebellion by every value that is “held” (and therefore is real at some level) yet denied the status of reality.

So long as we frame this tension between science and values as a clash, as an either/or that must be decided in favor of one side or the other, this rebellion will spread until it enfeebles, not just policy-making, but the whole liberal democratic project.

If science and evidence ought to take precedence, then logic will ultimately lead us to the China model of “democratic dictatorship.” There, the people chose a government in 1949 (via popular revolution), and it has been running policy experiments and gathering evidence ever since. Some experiments have been successful, some spectacularly not-so, but the Party retains the authority to learn, to adapt and to lead the people, scientifically, toward a material utopia. By force, when necessary.

If, instead, values and perspectives ought to take precedence, then far from wringing our hands at the proliferation of “fake news” and “alternative truths,” we should celebrate it. Now every value, not just Enlightenment values, has a truth machine that spews out authority, on the basis of which groups that hold a particular value can assert righteousness. Excellent! But then we ought to strike the “liberal” from liberal democracy, since constraints upon the exercise of authority have no privileged ground to stand on.

The only way to avoid these inevitable destinations of the either/or premise is to reintegrate science and value—at the level of policy-making and public discourse. We need a New Enlightenment. That is the task which the irruption of post-truth politics now makes urgent. To accomplish it, the Enlightened must first wake up to the values and intuitions that underlie our questing for evidence. For example: a sense that harmony with nature or with strangers is good; a conviction that we ought to privilege the welfare of future generations over our own; a feeling that security is a good in itself.

We, the Enlightened, must reverse the order in which we validate our values—from “Here is the evidence, and therefore this is what we should value” to “Here is what I value, and here is the evidence that helps explain why it’s good.”

</end rant>

Map #28: A Higher Loyalty?

I was in Washington, D.C. this past week—the talk-shop capital of the world. I attended a conference on the future of war and spoke at a conference on the future of energy. In between, I took in Mark Zuckerberg’s hearings on Capitol Hill—and even found time to binge on a season of West Wing. (In the 2000s, it was serious political drama. Now, it’s good comedy. What seemed scandalous for the White House fifteen years ago looks so cute today.)

Sucked Into D.C. Coffee Row

I’ve been trying to get my head out of politics for the last couple of weeks, but in D.C. that’s impossible. The first question everyone asks you is, “So, what do you do?” (Here, networking is a way of life.) Then there’s a mandatory 10-minute conversation about the last Trump-smacker: his latest Tweet, or the latest story to break on Politico or The Hill or NYT or WaPo. (This is a city that votes 90% Democrat.) Then they ask you your name.

Other than Zuck’s Facebook testimony, the biggest story on everyone’s lips in D.C. this past week was A Higher Loyalty, the forthcoming book by former FBI Director James Comey. Technically it’s an autobiography of Comey’s full career in law enforcement, but most people are only interested in the last chapter—his time with, and firing by, Donald Trump.

The title references the now infamous, intimate ‘loyalty dinner’ that Comey attended at the White House mere days after Trump’s inauguration. Trump allegedly asked his FBI Director to pledge his loyalty, and Comey, demurring, pledged ‘honesty’ instead.

A few months later, in May 2017, Trump fired Comey. That action prompted the Justice Department to appoint a Special Counsel to look into Trump’s Russia connections (if any), and here we still are, a year later, gobbling up every scrap of this story as fast as it emerges.

The release of Comey’s book this week marks another feeding frenzy. And while the talking heads on MSNBC and Fox News each push their particular narratives, the bigger question will be ignored completely: Is there something ‘higher’ to which all members of a society (even the democratically elected leader) owe loyalty? And if so, what is that thing?

This is a really good, really timely question.

The Constitution Isn’t High Enough

The obvious (but, I think, wrong) answer is ‘the constitution’. The U.S. constitution allows a two-thirds majority of the Senate to remove a president who has committed ‘Treason, Bribery, or other High Crimes and Misdemeanors.’ Democrats in this town dream that one day Special Counsel Robert Mueller’s investigation will find a smoking gun under Trump’s pillow, leaving the Senate—and the American people—no choice but to evict, and convict, The Donald.

More likely, I think, Mueller’s investigation will find ‘evidence of wrong-doing’—something in between ‘good’ and ‘evil’. And everyone will be just as divided as before—or, more likely, present divisions will worsen—because the process and the law leave ample room for judgment and interpretation. Was the investigative process ‘fair’? Can we ‘trust’ the process? And even if we do trust the process, does the ‘wrong-doing’ rise to the level of a ‘High Crime’—high enough to overturn the voters’ choice from 2016?

If there is to be something to which members of a society owe a ‘Higher Loyalty,’ it must be something above a country’s constitution. It must be that high place upon which we stand when the constitution is read.

Sociologists talk about trust. Economists talk about social capital. Biologists talk about the evolutionary advantages of cooperation. Political scientists talk about civil society. Lawyers talk about the distinction between ‘ethical’ and ‘legal’. Comey talks about a ‘higher loyalty’. They’re all investigations of the same idea: a healthy society depends upon more than its rules. It also depends upon a shared sense of why the rules matter.

But in a democracy, what’s higher than the constitution?

This Is Getting Biblical

One answer is: the covenant.

Constitutions define states; covenants define societies. Constitutions define the political, economic and legal system; covenants define the moral context in which these systems operate. Constitutions are contracts; covenants are the relationships of social life—relationships that, like ‘family’, cannot be reduced to legal language and market exchanges.

Among this Readership are heavyweights in political theory, constitutional law and sociology, and I sent out a little survey asking a few of you for good books that dig deeper into this idea of ‘covenant’. The #1 recommendation I got back was The Dignity Of Difference, by Jonathan Sacks. At first I was unsure—Jonathan Sacks is one of the most senior rabbis in the Jewish world, and I didn’t want to confuse his religious idea of covenant with the public idea of covenant. But it turns out that Jonathan spends a lot of time thinking about the latter.

In A Higher Loyalty, Comey talks about a looming constitutional crisis. Jonathan would say: Comey is mistaken. America doesn’t face a constitutional crisis; it faces a covenantal crisis. The latter is different, and deeper. It is a crisis over the question: What are the values that govern our society? 

The Best Things In Life Are Never Self-Evident

America’s covenant is not its constitution (signed in 1787), but its Declaration of Independence (signed in 1776). That earlier document famously begins:

We hold these truths to be self-evident: that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

The irony, of course, is that these truths are anything but self-evident. Throughout most societies in most of history, the social order has rested upon the idea that all people are not created equal. What the signatories of that Declaration really meant to say was, “Rather than rest upon the ideas of the past, we are going to build a new society upon the idea that each person (i.e., ‘white man’) is owed an equal measure of human dignity.” It took more than a decade of further debate to encode that social ideal into a state constitution.

The Declaration of Independence was a declaration of the moral objective toward which American society should strive. It’s echoed in the Preamble of the U.S. Constitution:

We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution…

These blocks of text thrum with shared moral meaning. They are among the best-known sentences in the English language. What has happened to throw this covenant into crisis?

And again, the obvious answer that many people give (‘Donald Trump’) is wrong.

For A Covenant To Hold, We Must Dignify Difference

I think Jonathan would say that the warring narratives between Fox News and MSNBC, between Trump advocates and Trump haters, are a reflection of what ultimately happens when we erode the boundary between politics and religion.

For a society’s covenant to remain strong and healthy, Jonathan argues, these two spheres of social life each need a separate space to play their respective roles. Religion (and by ‘religion’, Jonathan really means all forms of deeply felt group association) is supposed to be the space in which we build identity, community and solidarity. Politics is supposed to be the space in which we work out the differences that inevitably develop between these groups.

We need both. We are social beings. We are meaning-seekers. Group association is an important part of how we become ourselves—and become good citizens. (‘The universality of moral concern is not something we learn by being universal but by being particular.’ – Jonathan Sacks)

But we also need politics. Precisely because so much of our meaning and identity arises from experiences within our particular group, our own meanings and identities will never be universally shared. Living together requires a layer of cooperation that straddles these differences.

Society starts to get ugly whenever these two spheres of social life (the space where we belong, and the space where we cooperate) collapse into one.

When religion is politicized, God takes over the system. When politics turns into a religion, the system turns into a God.

Either way, Jonathan explains, respect for difference collapses. When religion is politicized, outsiders (non-believers) are denied rights. The chosen people become the master-race. When politics turns into a religion, outsiders (non-conformers) are granted rights if and only if they conform (and thus cease to be outsiders). The truth of a single culture becomes the measure of humanity.

Progressives vs Reversers

These concepts (thank you, Jonathan) offer us a fresh way of thinking about what the heck is going on in U.S. politics at the moment.

Is it possible that Democrats have been guilty of turning politics into a religion that demands conformity? Yesterday a New York Times op-ed talked about how many Democrats have stopped talking about themselves as ‘liberals’ (because it now carries tainted connotations in U.S. discourse), and substituted the word ‘progressive’ instead.

The distinction matters. Says the op-ed writer, Greg Weiner:

‘Progressives’ are inherently hostile to moderation because progress is an unmitigated good. There cannot be too much of it. For ‘progressives’, compromise (which entails accepting less progress) is not merely inadvisable but irrational. The critic of progress is not merely wrong but a fool.

 

Because progress is an unadulterated good, it supersedes the rights of its opponents.

 

This is one reason progressives have alienated moderate voters who turned to Donald Trump in 2016. The ideology of progress tends to regard the traditions that have customarily bound communities, and which mattered to Trump voters who were alarmed by the rapid transformation of society, as a fatuous rejection of progress.

Likewise, is it possible that Republicans have been guilty of turning religion (be it guns or Jesus) into a test of citizenship—a test that demands conversion?

I think maybe yes. And if so, that’s the fundamental problem, because both sides are doing something to denigrate difference. Unless we all dignify difference, no social covenant can hold. Says Jonathan:

‘Covenants exist because we are different and seek to preserve difference, even as we come together to bring our several gifts to the common good.

…This is not the cosmopolitanism of those who belong nowhere, but the deep human understanding that passes between people who, knowing how important their attachments are to them, understand how deeply someone else’s different attachments matter to them also.’

Re-Founding America

Having just spent a whole week in Washington, D.C., I can breezily say that the best way to heal America’s divisions is for everyone to go back to Philadelphia, hold hands together, and rededicate themselves to their shared moral project: to recognize human equality and oppose sameness.

Of course, I doubt either side is ready to lay down arms and make a new covenant just yet. War is clarifying. It divides the world into us and them. Peace is the confusing part. It provokes a crisis of identity. In order to make peace with the other, we must find something in common between us, worthy of mutual respect.

Right now that’s a tall order—not just in U.S. politics, but in the domestic politics across the democratic world. (‘Populist’ isn’t a label that’s intended as a sign of respect.) But maybe all of us can start asking some good questions, wherever we are, that (a) make us sound smart, but also (b) start the conversation in our community about the covenant to which we all owe a ‘higher loyalty’:

1. Tribalism (i.e., my particular group’s ideas dominate) won’t work. Universalism (i.e., one human truth overrides my particular group’s ideas) won’t work either. So what can?

2. ‘Those who are confident in their faith are not threatened but enlarged by the different faith of others.’ (Jonathan Sacks) Are we feeling threatened? Other than attacking the other, is there another way to restore our confidence?

3. Is all this week’s coverage of James Comey’s new book helping to bring people closer to his main message (‘There’s something higher that spans our differences and makes us one society’), or drawing us further away?

 

Admittedly, that last question is purely rhetorical—we all know the answer—but somehow a list feels incomplete unless it has three items. 🙂

Brave voyages, to us all,

 

Chris

Map #27: Fixing Fake News

I wonder, do you ever share my feeling that ‘fake news’ and ‘post-truth’—these phrases that get thrown about every day by the commentariat—cloud our understanding rather than clarify it? To me, such phrases—frequently used, fuzzily defined—are like unprocessed items in the inbox of my brain. I pick them up. I put them down. I move them from one side of my desk to the other, without ever really opening them up to make sense of what they are and what to do with them.

One of my friends, Dr Harun Yilmaz, finally got tired of my conceptual fuzziness, and he and a colleague wrote a brief book to tell me what fake news is, how the post-truth era has come about, and how to win public trust nowadays—for power or profit—now that the old trust engines are broken. It’s now become my little bible on the subject. (And although I’d rather keep the insights all to myself, he’s just published it as an affordable e-book called Marketing In The Post-Truth Era.)

I’ve never met Harun’s co-author, Nilufar Sharipova, but Harun and I go back many years. We did our PhDs side-by-side at Oxford. While I studied China, he studied the Soviet Union—specifically, how Soviet politicians and historians constructed national stories to give millions of people an imaginary, shared past that helped explain why the USSR belonged together. (I’m sure his thesis was very interesting, but I mostly remember how Harun bribed his way past local Kazakh officials with bottles of vodka to peek into their dusty Soviet archives.)

Harun’s been studying ‘fake news’ and ‘post-truth’ since it was still good ol’ ‘propaganda’—and that, from the very best in the business.

The Rise Of Fake News, In Three Key Concepts

1. The Truth Machine

To understand the post-truth era, Harun would say, we first need to understand the prior, truth era. Even back in the truth era, pure truth or real news never existed. Whether we judged a message to be true depended on (a) how the editor of the message presented it and (b) how we, the viewers, perceived it.

Here’s a visual example of (a) from the Iraq War. With the same picture (middle frame), I can present two completely different messages, depending on how I crop it.

Everything you read in a newspaper or hear on a radio, every question asked and answered, is the outcome of a human decision to accord it priority over another item. (Simon Jenkins, Columnist, The Guardian)

What about (b)? How did we perceive ‘the news’ in the era before some people started calling it ‘fake’? The honest answer—for me, at least—was that I mostly took the news to be something ‘real’.

In the post-truth era, when ‘fake news’ has now become a frequent problem, ‘critical thinking’ has become a frequent antidote. Given social media, which allows anyone to say anything to everyone, we need to educate ourselves and our children to think critically about everything we see, read and hear.

Harun would say, we needed—and lacked—this skill back in the age of mass media, too. Yes, the power to speak to large audiences was more concentrated. (You needed a broadcasting license, a TV station, a radio station, a newspaper or a publishing house—and not everyone had those.) But that concentration of power didn’t necessarily make the messages these media machines churned out more trustworthy. That was our perception—and, arguably, our naïveté.

(Here’s a personal anecdote to think about. Back in the mass media age, when I lived in China, a highly educated Chinese friend of mine argued that Chinese audiences were far more media-savvy than Western audiences. They at least knew that everything they saw, read or heard from mass media had a specific editorial bias. In China, you didn’t pick up the daily paper to read ‘the news’; you picked it up to infer the Communist Party’s agenda and priorities.

Now, to be fair to the ‘Westerners’ among us, we were never completely naïve consumers of mass media. I knew that the New York Times had its bias, which was different from the Wall Street Journal. But it’s also true that I never saw, nor enquired into, the editorial process of either. Who decided which stories were newsworthy and which weren’t? Exactly what agendas, and whose agendas, were being served by the overall narrative?)

The point is, even in the truth era, truth was something manufactured. And ‘The Truth Machine’, as Harun and his colleague call it, had three parts: experts, numbers and mass media. When orchestrated together—the experts say, the numbers show, the news reports—these three sources of legitimacy could turn almost any message into ‘truth’.

Throughout the 20th century, governments all over the world used The Truth Machine to dramatic effect: policy priorities were fed in one end, and popular support came out the other. (For CIA history buffs, Harun gives a great example from the 1950s. In 1950, Guatemala overwhelmingly elected a new president who promised to wrest control of the country’s banana-based economy back from the United Fruit Company (an American corporation that owned all the ports and most of the land) and return control to the people. United Fruit and the U.S. government deployed experts, swayed journalists, staged events and made up facts to help the American public reframe the situation in Guatemala as a communist threat to American values and democracy. The new Guatemalan president had no links to the Soviet Union, yet when the CIA helped to remove him via military coup in 1954, public opinion in the U.S. held it up as another victory in the Cold War.)

Businesses, too, have been using The Truth Machine for decades to wrap commercial messages in the legitimacy of ‘truth’. A serious-looking dentist in a white uniform (expert) advises us to use Colgate toothpaste. Mr Clean bathroom cleaner kills 99.9% of bacteria (numbers). And we’re shown these advertisements over and over again (mass media).

2. The Funhouse

The more accurate way to think about our ‘post-truth’ problem today, Harun argues, is not that ‘real news’ has suddenly become drowned out by ‘fake news’. Rather: whereas once there was only one, or very few, Truth Machines operating in society, now there are many. And they’re working against each other, spewing out contradictory truths. The Truth Machines themselves have become a contradiction, since the more competing truths they manufacture, the more they undermine public trust in the authority of numbers, experts and mass media.

We cannot simply trust convincing-looking numbers anymore, because we are now bombarded with numbers that look convincing. We cannot simply trust experts anymore, because we are now bombarded by experts telling us contradictory things. We cannot trust mass media anymore, because mass media is just full of experts and numbers—which we know we can’t simply trust anymore.

The Truth Machine is broken, and so it’s like we’ve gone to the amusement park and stepped inside ‘The Funhouse’—another great metaphor, courtesy of Harun and Nilufar. In the truth era, we assumed that individuals would read and listen to different messages and make a rational choice between them. Now, multiple, contradictory truths create so much confusion that individuals start to doubt everything, like in a hall of mirrors. People think, ‘There is no way for me to know what is objectively true anymore.’

3. The Group

What we all need is a new source of sincerity. And we’re finding it: within our social reference group. It’s our first and last refuge of belief and principle about what is true and what is untrue. Rational analysis has become unreliable, so we are reverting to our oldest strategy for making sense of our world.

Groups as trust machines

The simple fact is that we are social animals. And so ‘groups’ are a real, natural, organic part of our lives. Social science is full of simple experiments, going back to its beginnings, that demonstrate how our group influences how we as individuals think and behave.

(One of the oldest and simplest experiments was conducted in the 1930s by one of the founding fathers of social psychology, Muzafer Sherif. He put participants alone in a completely dark room, except for a single penlight at the other end. He asked each person to estimate how much the point of light moved. (In fact, the light didn’t move at all; our eye muscles fatigue and twitch whenever we stare at something long enough, and those twitches cause us to see movement where there isn’t any.) Individual guesses varied widely, but once the participants got together, those whose guesses were at the high end of the range reduced theirs, and those whose guesses were at the low end raised theirs. Take-away: The group norm becomes the frame of reference for our individual perceptions—especially in ambiguous situations.)
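Since I’m supposed to be practising my Python anyway, here’s a toy sketch of that pull toward the group norm. (This is my own caricature, not Sherif’s actual procedure; the `pull` parameter is an invented knob for how strongly individuals defer to the group.)

```python
# Toy model (my own, not Sherif's) of estimates drifting toward a group norm.
def converge(estimates, rounds=5, pull=0.5):
    """Each round, every estimate moves `pull` of the way toward the group mean."""
    for _ in range(rounds):
        mean = sum(estimates) / len(estimates)
        estimates = [e + pull * (mean - e) for e in estimates]
    return estimates

# Inches of 'movement' each participant reported when alone:
guesses = [2.0, 10.0, 25.0]
# After a few rounds together, the 23-inch spread collapses to under an inch.
print(converge(guesses))
```

Nothing profound in the arithmetic, but it captures the shape of the result: nobody’s guess ‘wins’; everyone quietly migrates toward the middle.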

The same technological forces behind the breakdown of The Truth Machine are also behind the rising power of groups. Organic social groups can form more easily now—around shared passions and experiences—than was previously possible. Small, scattered communities of interest have become global networks of like-mindedness. Coordinating messages and meetups, once expensive and difficult, is now free and frictionless. And social groups can filter more easily now, too, creating echo chambers that reinforce opinions within the group and delete dissonant voices.

Making Group Truths

While some of us bemoan the ‘polarization’ or ‘Balkanization’ of public opinion, some influencers—politicians, advertisers—are simply shifting strategies to better leverage this re-emerging power of group trust. More and more influencers are figuring out that, although the old Truth Machine is broken, a new ‘Truth Machine 2.0’ has been born. In this post-truth era, a manufactured message can still become trustworthy—if it reaches an individual via a group.

In fact, this new Truth Machine generates more powerful truths than the old Truth Machine ever could. There was always something artificial about the truths that the old machine manufactured; they came at us via those doctors in lab coats and news anchors pretending to scribble notes behind their news desks. But these new truths come at us organically—with fewer traces of the industrial process that spawned them.

Harun points to the ‘Pizzagate’ episode during the 2016 presidential election—maybe the wildest example of the power of this new-and-improved truth machine. Stories had circulated on social media that Hillary Clinton and other leading Democrats were running a child trafficking ring out of a pizzeria in Washington, DC. In December 2016, one proactive citizen, a 28-year-old father of two, burst into the pizzeria with his AR-15 assault rifle to free the children. He fired shots at the fleeing employees, then searched for the children. He became confused (and surrendered to DC police) when he didn’t find any.

The mainstream media debunked the child-trafficking story—which, for some, only confirmed its truth. According to public opinion polls at the time, 9% of Americans accepted the story as reliable, trustworthy and accurate. Another 19% found it ‘somewhat plausible’.

Is that a lot? I think it is: with almost no budget, no experts, no analysis, no media agency, an absurd fiction became a dangerous truth for millions of people.

Marketing Group Truths

Harun’s book with Nilufar is aimed at businesses—to help marketers rethink marketing in an age when the public has lost trust in conventional messengers. And this age does demand a fundamental rethink of the marketing function. In the industrial era, business broke consumer society into segments. We were ‘soccer moms’ and ‘weekend warriors’, ‘tech enthusiasts’ and ‘heartland households’. These segments weren’t organic. They weren’t real groups that their members identified with. They were artificial, rational constructs meant to lump together people with shared characteristics who would perceive the same message similarly. And they worked, so long as The Truth Machine worked.

‘Group marketing’ (a deceptively simple term that holds deep insight) accepts that experts, numbers and mass media are losing their authority to sway our choice-making. We just don’t trust these mass-manufactured truths anymore. But we do trust our group(s). And so, more and more of our buying decisions are based on the logic, ‘I’ll buy this because my group buys it.’

Within this growing phenomenon, Harun and Nilufar have clarified an important new rule in how to create successful brands. It used to be that a company had a Product, attached a Story to that product, and this P+S became a Brand that people Consumed. P + S = B, and B → C.

Group marketing demands a new equation. The stronger the corporate Story, the less freedom groups have to tell their own stories with a Product, and the less useful it is to the group as an expressive device. So the goal is to get the Product into the Group’s hands with a minimum of corporate storytelling. Instead, let the Group build the Brand as the sum of its members’ Individual Stories. Harun and Nilufar compiled several successful examples, my favorite of which is how Mountain Dew infiltrated skateboarding groups in Colombia. (Look for this tactic, and you start to see it everywhere…)
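That trade-off—the heavier the corporate Story, the less room the Group has to build the Brand itself—can be caricatured in a few lines of Python. (Entirely my own toy model; none of the names or numbers come from Harun and Nilufar’s book.)

```python
# Toy caricature (my invention, not the book's model) of the group-marketing
# trade-off: heavy corporate storytelling crowds out the group's own stories.
def group_brand_strength(corporate_story, group_stories):
    """corporate_story in [0, 1]; group_stories = stories told by group members."""
    expressive_room = 1.0 - corporate_story  # room left for the group's own meaning
    return expressive_room * group_stories

print(group_brand_strength(0.75, 10))  # heavy corporate story -> 2.5
print(group_brand_strength(0.25, 10))  # light corporate story -> 7.5
```

The point of the caricature: the same ten member-stories are worth three times as much to the brand when the company gets out of the way.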

Truth As A Disease

To repeat myself: more and more of our buying decisions are based on the logic, ‘I’ll buy this because my group buys it.’

What worked for Pepsi’s Mountain Dew product also worked for Cambridge Analytica’s political messaging. Ideas were manufactured, planted into groups, and accepted by group members as truth because the ideas came to them via the group.

This is where business and politics differ. Businesses can adapt their consumer persuasion to this new group-centric approach, and the economy will still function fine. It’s less clear that we can say the same about our politics.

Liberal democracy isn’t built to operate on truth. It’s built to operate on doubt. Liberal democracy is an Enlightenment project from the Age of Reason. It assumes that truth cannot be known in advance (a priori, as the philosophers say). Instead, society must grope toward the truth by making guesses—and being sensitive to what the people find out along the way. Democracy is an exploration. It depends upon a shared commitment to discovery.

Now, thanks to all these competing Truth Machines, a pre-Enlightenment culture of truth is returning—and spreading. It is a blight that threatens the whole ecology of our political system. When too many people believe they have found truth, democracy breaks down. Once truth has been found, the common project of discovery is complete. There is no more sense in sharing power with those who don’t realize it. There is no more sense in curiosity, in new evidence.

Curing Ourselves Of Truth

To rescue the possibility of groping toward Paradise democratically, we need to inject our own group discourses with doubt.

I don’t know how we manage that feat. (But I’m open to suggestions!) I only know that it’s the logical answer. If an idea is foreign to the group, the group rejects it. Therefore, only group insiders can introduce the group to doubts about its own shared ‘truths’.

Only Nixon could go to China.

And so (I bet you thought you’d never hear this one), the world needs more Nixons.

 

Map #26: Finding The Real In A Post-Truth World

Our Shared Awareness Of Atomization

I’m guessing we all know the sensation of being detached, somehow, from the whole: when we catch ourselves in the act of reaching impulsively for our mobile phone and feel an idle guilt about our addiction to consuming content that somehow feels closer to junk food than vegetables; when we give meditation a try, find it helpful for some inexplicable reason…and then struggle to find the time to meditate again; when we get out of the city for a holiday, widen our vistas, and then feel oddly unfocussed for the first few days back at the office.

(I’ve just experienced the latter. This past week, I went cross-country skiing with an old friend in the Austrian Alps. At the top of a long uphill climb, we paused to catch our breath and take in the view. The air was perfectly still. The sky was a cloudless blue. The mountain peaks were a brilliant white. I closed my eyes and felt the sun warming my closed eyelids. I could hear a few birds singing in the surrounding forest; off to my right, I could hear pine needles crackling as they melted free of their snowy cocoons. I heard my breath. I felt it. For no particular reason, I was profoundly happy.

…and since returning to London, it’s taken me a solid two or three days of circling around my laptop to recover the focus I need to write.)

Enterprising minds have spotted our discontent with disintegration and turned reintegration into an industry. Grocery delivery services here in London emphasize, variously, ‘fresh’, ‘simple’, ‘organic’ or ‘mindful’. Meditation apps are booming. Yoga makes you balanced. Electric cars make you clean. To restore lost relationships — with our food, ourselves, our community, our environment, with the truth — has become one of the most compelling stories reshaping consumer behavior.

We shouldn’t be surprised that it has become one of the most compelling stories reshaping politics, business and society, too. Economists, sociologists, scientists, tech titans and politicians today all ply us with the need for, or the promise of, restoration. (Start to listen for it, and you start to hear it everywhere…)


An Autopsy Of Our Mind

A couple letters ago, I shared a brief scan of how different researchers across the social sciences today explain why society is disintegrating, and what to do about it. Every branch of social science offers part of the diagnosis, and part of the cure.

Their diagnoses all relate to the fragmentation that is happening ‘out there’, in the external world. But, as we’ve all experienced, the fragmentation is also happening ‘in here’. A deeper disintegration is underway, at the level of our consciousness.

This deeper disintegration is hard to research. It doesn’t yield data the same way that, say, economic inequality does. Yes, we can point to plenty of indirect evidence. The extreme cases show up in our public health statistics — rising rates of youth suicide (here in the UK, suicide is the leading cause of death among people aged 20–34), the opioid epidemic and other substance abuse, and soaring numbers of mental health cases, for example. But we cannot cut open our minds to perform an autopsy; we cannot compare the brain of a youth twenty-five years ago with the brain of a youth suicide victim today and observe how that person possessed a greater sense of belonging-to-something than this person did.

Because this internal reality of disintegration is hard to show empirically, it’s hard for us to accept it as ‘real’. (Wherever we live, we’ve all witnessed the slow struggle for society to take mental illness seriously and to overcome the stigma that’s been attached to it.) And yet, it clearly is real. We’ve all felt it. We all know the behaviors, the hungers, that it can drive. We all know the fleeting bliss that a sense of reintegration can generate.


Getting Real

To better understand the disintegration that right now seems to be taking place between our own ears, I’ve been reading a book by Jean Gebser called The Ever-Present Origin. It’s basically a history of consciousness — a history of how different cultures throughout history have had different awarenesses (if that’s a word). It’s a thick book. It’s a dense book. I wouldn’t exactly recommend it, frankly, except that it’s one of the most important books in post-modern philosophy. I hesitate even to write about it, because it will take me several more years to digest. But it is mind-blowing. Like Yuval Harari’s Sapiens, but less accessible and more insightful.

Gebser (1905–1973) was a German philosopher and linguist, and he first published the book in 1949. It’s obvious that he was heavily motivated by the fresh scars of World War II and by the looming threat of all-out nuclear war, from quotes such as:

The present is defined by an increase in technological power, inversely proportional to our sense of responsibility…if we do not overcome this crisis, it will overcome us…Either we will be disintegrated and dispersed, or we must find a new way to come together.

and…

The restructuring of our entire reality has begun; it is up to us whether it happens with our help, or despite our lack of insight. If it occurs with our help, then we shall avoid a universal catastrophe; if it occurs without our aid, then its completion will cost greater pain and torment than we suffered during two world wars.

But, as anyone who invests years to write a book must, Gebser did possess some hope that a brighter future lay ahead:

Epochs of great confusion and general uncertainty…contain the slumbering, not-yet-manifest seeds of clarity and certainty.


Make Me Whole Again

Gebser’s hunch was that we can’t solve the disintegration that’s underway ‘out there’ without also solving the disintegration that’s underway ‘in here’. We won’t solve the external crises of fake news, or inequality, or political extremism, or ecological crises, without also solving our internal crises of anxiety, emptiness, self-absorption and confusion.

That’s because, for Gebser, today’s external and internal crises are two sides of the same mistake, namely that ‘we have conceded the status of ‘reality’ to only an extremely limited world, one which is barely one-third of what constitutes us and the world as a whole.’

In other words, the root of our disintegration today is that we’ve denied the reality of everything that could restore our sense of belonging, of integration, of harmony, with our selves, each other and the world.

A Brief History Of Consciousness

The one-third of reality which we do accept as real is the mental. This is the reality of measurable space-time; of measurable cause-and-effect; of time broken into past, present and future; of calendars, goals and project plans; of Cogito ergo sum, I think therefore I am.

And the two-thirds that have gone missing? They are older, earlier aspects of our consciousness that we dismissed in order to give primacy to our modern, mental awareness.

The first, Gebser calls ‘the magical’. The magical is the spaceless, timeless oneness that I sensed last week in the Austrian Alps, or whenever we still gaze up at a starry night, or pray in a crisis moment, or whenever we lose ourselves in the beat of the music that’s playing. Nowadays, we are deeply suspicious of anything labelled ‘magic’. But there was a time in human pre-history when everything in our awareness was magical. We had no notion of using measurable space and time to separate cause and effect, and so everything that happened seemed connected to everything else. Rain dances made rain; curses punished wrong-doers; an arrow drawn on a cave painting ‘killed’ the buffalo before the hunt even began. In the magical phase of human consciousness, reality was one big unified thing within which we must listen in order to survive. (I hear, therefore I am.)

The second aspect of reality that we’re missing today, Gebser calls ‘the mythical’. Mythical consciousness first began when we discovered that the oneness of nature was, in a lot of ways, more like a circle. Natural events recur, rhythmically. Once this awareness of ‘recurrence’ became part of our reality, reality became, not just a oneness, but a polarity: day and night, summer and winter, birth and death, yin and yang. We became aware that the polarity of nature extended into us: the body and the soul. We began to weave events, objects and people together into stories that gave reality greater coherence — that made all the recurrences and balances fit together. We imagined ourselves as heroes in these stories; we imagined life as a hero’s journey; we shared collective dreams as a community. In the mythical phase of human consciousness, the world became a story in which we must speak in order to survive. (I speak, therefore I am.)

For Gebser, our third aspect of consciousness, the mental, emerged when we began to go off-script (around 2,500 years ago). Instead of finding our roles within the stories inspired by nature’s patterns, we began to ad lib our own intentions and journeys, by drawing instead upon something inside ourselves. Our mythical awareness of nature’s polarity was replaced by our mental awareness of a duality: us, outside of nature.

Once we stepped outside of nature, we could begin to direct our own lives. Time, which in the magical world had been one single big moment and in the mythical world had been a circle we traced over and over again, became the line (past, present, future) along which we played out our individual intentions. Time was now finite for us, and measuring time — conquering time! — began to matter. Space, which in the magical and mythical worlds had been irrelevant to the fulfillment of our lives, now imposed itself as a limit on how far we could go. Space became finite for us, and measuring space — conquering space! — began to matter.

If you grasp that last paragraph, then you’ve grasped the past 2,500 years of how our sense of ‘reality’ has been changing. In short: we’ve been getting better and better at measuring space and time, which (a) gives us more and more power to exert our own intentions over nature but also (b) draws us further and further away from the oneness of space and time that we used to know intuitively.

(To drive this point home, Gebser offers two seminal examples: the discovery of linear perspective during the Renaissance, and the discovery of space-time in the late 19th and 20th century — which is what got me re-reading Stephen Hawking. They’re fascinating examples, and I’ll digress into them at the bottom of the page, if you’re interested.)


Finding The Real In A Post-Truth World

Fast-forward to today, and Gebser’s history of human consciousness gives us a fresh lens for understanding the biggest changes underway today.

Take the mega-problem of post-truth politics. Why do once-powerful arguments based on facts and evidence suddenly seem powerless? For Gebser, this is a familiar pattern of exhaustion. As myth replaced magic, the power of magic spells weakened into mere bewitchment, and finally into empty rituals and superstition. As mind replaced myth, the epic explanations for everything became mere stories and entertainment.

And now ‘facts’ are becoming mere ‘alternatives’.

Our instinctive reaction (mine, anyway) is to leap to the defense of Reason. We must re-educate ourselves on how to think critically, how to recognize bias, how to apply logic and to be ruled by the knowledge that emerges from scientific methods. We must put wishful thinking and tribal tendencies back in their bottles — through heavy regulation, if necessary.

Except we can’t, Gebser would say. That is precisely the conceit that led to the shock of a President Trump and a Brexit vote (or, he argued in his own lifetime, to two World Wars).

Among American voters in 2016, Donald Trump won hearts, not minds. He didn’t give any reasoned arguments. He spoke instead in mythic terms about an imaginary America under siege. He held up tribal totems — the flag, guns, male aggression. In a recent New York Times piece, the columnist David Brooks bemoaned this neo-tribalism. Gebser would say: it has always been part of us.

The magical and the mythical are real, Gebser would explain, and that is the lesson that we need to take away from the shock events of recent years. Not real in the same way that we measure space and time, but real in our consciousness nonetheless. Modern, mental humanity gets very uncomfortable at the insinuation that reality has magical and mythical aspects. We deny the possibility. But, Gebser argues, that only makes us fools. ‘Those who are unaware of these aspects, fall victim to them.’

The resurgent power of magic and myth in society is a sign that our Age of Reason — the age of mind over everything — is reaching exhaustion. The project was flawed from the beginning, Gebser would say, because we can no more purge the magic and mythical from our reality than we can purge them from our language. Every time that we feel ‘disconnected’, or ‘unbalanced’, or feel anxious that we’ve ‘run out of time’, we betray our yearning to get back to the original oneness of space and time that’s now been completely carved up by rational thought.

In this moment of mental crisis, Gebser predicted, ‘soon we will witness the rise of some potentate or dictator who will pass himself off as a ‘savior’ or prophet and allow himself to be worshipped as such.’ (I’d say we’ve reached that point.)

But that prophet is false. He is, in Gebser’s words, ‘less than an adversary: he is the ruinous expression of man’s ultimate alienation from himself and the world.’ (Sounds about right.) He demonstrates that the latent, neglected power of magic and myth can still move us powerfully, but he does so by lashing out at our mental reality. In the end we’re left more fragmented.

The healthy response in this post-truth age can’t be to deny what reason has revealed to us. And it isn’t to purge magic and myth, either. (We can’t, and more to the point we shouldn’t, since doing so would also purge all emotion and inspiration.) Instead, Gebser thought, we need to ‘renounce the exclusive claim of the mental structure’ over what’s real, and reintegrate the magic and the mythic into our consciousness.

‘Like all ages, our generation, too, has its task.’ It is to learn to see ourselves ‘as the interplay of magic unity and mythical polarity and mental conceptuality and purposefulness. Only as a whole person is a person in a position to perceive the whole.’


So…Where To Begin?

I’m going to be chewing on all this for a long, long time. But my immediate take-aways are these:

  • Trust our magic and mythic impulses more. These impulses are everywhere today in our consumer society, in art, in science. Even corporate executives have started talking about making their companies more ‘soulful’. At the same time, we hesitate to follow them, because we don’t understand the rational basis for these impulses. Well, all the above gives us that rational basis, in a meta-sort-of-way. So we should ‘go with them’, and feel more sure about doing so. So long as we bring our mental awareness along with us, we won’t slip into New Age pseudo-spiritualism. We’ll end up somewhere more real.
  • It’s time to get past gawking at the inconsistencies and ignorance of the Donald Trumps of the world. Their ignorance is irrelevant to their power, and it’s precisely that power that we need to understand and integrate — in a healthier way — into our politics.
  • In history, periods of general confusion and anxiety ultimately arrived at new clarity and certainty. The more ‘awake’ we can be to the conflicts inside us, the sooner we’ll all get there. I like this quote a lot: ‘Our sole concern must be with making manifest the future which is immanent in ourselves.’ That’s deep.

Thoughts?

 


How We’ve Conquered Space And Time

Gebser offers two seminal examples. The first was the discovery of linear perspective during the Renaissance — pioneered by the Italian artist Filippo Brunelleschi, and perfected by Leonardo da Vinci. Linear perspective creates the illusion of depth — of a third dimension — on a two-dimensional surface.

How could a new style of drawing be of historical importance? It makes no sense, until you try to imagine what it was like to try to conquer space without it. Space is three-dimensional. If you don’t have any way of communicating ideas in three dimensions, then space is difficult to master. No two-dimensional picture of the human anatomy can prepare a medieval doctor for what he finds when he cuts open a patient; he can only learn from cadavers — and his own experience. No two-dimensional drawing of a long-standing tower can explain to an architect how to build it; she can only mock up a model, and hope that her real-life version stands the test of time, too. No two-dimensional drawing of a water wheel, or a clock, or even a knot, can reliably show a novice how to make one; he can only apprentice himself to a master and watch how it’s done.

But da Vinci’s drawings — for complex machines, for giant statues, for soaring bridges — can be followed, even centuries later, to bring his ideas into three dimensional reality.

Until we had a technology to reliably represent space, the reality of space was a sort-of prison that trapped our ideas. But with the advent, and perfection, of linear perspective, suddenly space became our prisoner.

The second example Gebser offers was the discovery of space-time in the late 19th and early 20th century. Basically, we figured out how to think about time as a fourth dimension of space. Mathematicians call these conceptions of four and higher dimensions ‘non-Euclidean geometries’. Just as linear perspective helped us to measure and conquer space, our ability to represent time as ‘just another dimension’ improved our powers to measure and conquer time.

(When Stephen Hawking passed away recently, every newspaper in the world ran an obituary. None helped us to understand the significance of his most famous book, A Brief History Of Time. That book was all about trying to help the rest of us understand how physicists think about time as a fourth dimension — and why being able to do so makes a whole new era of scientific progress possible: from nuclear power to mobile phones to quantum computers.)

 

Map #24: Voyages Into The Unknown—And The Familiar


Donald Trump can become President of the United States. Boris Johnson can become Foreign Secretary of Britain. Silvio Berlusconi, whose bongo bongo parties once secured his status as the most debauched leader in the democratic world, is back on top of Italian politics. Far Right extremists can win seats in the German Bundestag.

This is the new political world we are in. How did we get here?

Theories abound. If 2016 was the year of shock, then 2017 was our year to gawk. By 2018, a whole industry of handwringing had sprung up to explain to us how our expectations came to be so blind to reality. That industry is now flourishing.

The clearest theories of ‘how we got to now’ are those put forth in the academic literature, where the rules of debate are explicit and where disagreements are dressed in politeness (more like pistols-at-dawn than revenge porn). Fake news exists in academic publications, to be sure, but much fakery is filtered out by very clear rules about what you can and cannot say to support your views. You can’t say ‘In my opinion…’, for example. You can’t say ‘You are entitled to your facts, and I am entitled to mine.’ More precisely, you can say such things, so long as you can accept the laughter, ridicule and—most damning of all—anonymity—that will follow. In the academy, unless you can say ‘The evidence suggests…,’ and unless you can cite evidence that you believe supports your suggestion, your views will gain few followers.

(The academy pays a price for this clarity, and that price is truth. Academic literature contains no truth. It contains only theories—theories that happen to fit the available facts. Academics may persuade themselves (they may even persuade other people!) that they are right, but unlike the righteousness that priests, prophets and politicians might enjoy, academic righteousness is always at risk: some future facts might prove their beloved theory wrong.)


How Social Science Thinks

In the physical sciences, facts are arranged in a causal flow. To oversimplify: All biology is, ultimately, chemistry. All chemistry is, ultimately, physics. Physics is the fountainhead. So if physicists discover a new fact about reality, then all the scientists working on problems downstream—the chemists and biologists—might need to re-examine their own theories to make sure they still conform to the upstream story.

But ‘How did we get to the new political world we are in?’ is a question for social science. And in the social sciences, it’s unclear where causation begins. Economists love to measure productivity and count money, and theorize about how the economy can explain everything else. Political scientists love to run regressions on election results, and theorize about how politics can explain everything else. Sociologists love to identify the shared ideas that differentiate groups—groups that, say, began with the exact same resources but somehow ended up in opposite situations. Those shared ideas—you guessed it—can explain everything else.

The messy reality, of course, is that the causal flow of social change is not linear. It is, instead, a braided stream:

Social change happens through multiple channels that divide and recombine. Causal flows converge here and diverge there—sometimes reinforcing each other, sometimes cancelling each other out. (I stole this metaphor from my doctoral supervisor, Vivienne Shue, and her latest book, To Govern China.)

Across the academy of social sciences, each discipline is trying to retrace the winding ways that led us to this new and unfamiliar world.


It’s The Economy, Stupid

Economists retrace our voyage into the economic unknown. Globalization is shifting the balance of economic power globally from the Atlantic to Eurasia. Automation is worsening the imbalance of economic power within our economies between labour and capital. Economic growth, which has been driving progress across Europe and North America since the Industrial Revolution, is slowing down, and evidence suggests that it might never recover lost momentum. That evidence includes an aging population, diminishing returns from education and weaker-than-predicted productivity gains from the digital revolution. At the household level, costs of housing and living are soaring, inequality is widening, consumer debt is ballooning, and a looming robo-calypse threatens to eliminate half of all present-day jobs.

These economic facts combine to ask us: ’Is progress still possible?’ For society, it’s a big question. If we lose faith in our collective story of economic progress, do we become less tolerant of one another and more divisive? The theory is that if we start believing that the pie won’t get any bigger than it is today (and might even start shrinking!), then social solidarity comes under strain. We become hostile to the idea of sharing and more focussed on making sure that we eat our fill first. Sounds a lot like our present-day trade and immigration debates.


It’s Politics, Stupid

Political scientists chart our drift into unfamiliar political territory. In the U.S., spending money now counts as constitutionally protected speech. Too much money in politics means that politicians need the support of big finance or big business or billionaire egos to get re-elected—as much as, or more than, the support of voters. That legislative capture, combined with the offshoring and automation of the old industrial economy, is leading to the postindustrial collapse of union power and labour movements. The consolidation of local media into big, sometimes foreign-owned, conglomerates has led to the vanishing of working-class, street-level issues from the public eye. Is it any wonder that trust in public institutions is plummeting across the advanced democracies, or that China’s alternative model shines brighter by the year to emerging economies and the strongmen who lead them?

These political facts combine to ask us: ‘Does democracy still work?’ Is ‘one person, one vote’ a promise or a lie? We see these doubts being voiced, forcefully, across the advanced democracies.


It’s Social Change, Stupid

Sociologists chronicle our recent expeditions beyond the boundaries of all known social experience. ‘Liberal democracy’ is leading us toward tipping points that challenge our commitment to liberalism—perhaps more seriously than at any time since the French Revolution or the U.S. Civil War. In North America, the aging of immigrant populations of European ancestry, alongside annual inflows and higher birthrates among today’s non-European migrants, is slowly shifting demographic facts. Dominant racial, ethnic and religious groups are losing their grip on cultural primacy. On cue, culture wars are breaking out, over gay marriage, feminism, religion, guns. The core tenet of liberalism—that, however compelling our tribal instincts may be, rationally we know that we all share in a universal humanity—now sounds naïve in the same societies that once trumpeted it.

These social trends together raise the big question: ‘Who owns the future?’ And we see this question being fought over within every advanced democracy.


It’s Technology, Stupid

Media theorists are mapping our journey into the technological unknown. Our present institutions and habits of democracy developed within a culture of print media. They developed in an ‘Age of Reason’, when truth was no longer jealously guarded by Church and State, but instead was made accessible to every man (not women, back then) with the ability to read. Rational thought is the unique human capacity that separates civilization from the state of nature, and it is the justification for giving you and me a vote, and for protecting the ‘public sphere’ with rights to speak, to publish and to assemble. This idealized view of democracy formed the basis for a system of government that worked, somewhat—at least, better than the alternatives.

But now, with social media, big data and smart algorithms, we have shattered the public sphere into a billion individual shards of glass. No ‘national conversation’ connects us anymore, yet all of us, with our own shard of glass, can poke our neighbor’s eye. Can discourse still be rational in a medium where the audience to the lie can easily outnumber the audience to the truth? Is ‘popular sovereignty’ still possible in a medium that easily admits foreign interference?

In short, ‘Is democratic discourse still viable?’ It’s the unsettling question that lurks inside all our smartphones.


Calling all theorists

Even from this quick-and-dirty sketch of the terrain, some common features show up:

The world still makes sense

Hindsight is always 20/20. Even so, it’s comforting to know that the world (no matter how new or unfamiliar it may seem) still does make sense—once we shift the facts that we pay attention to. Is this a moment of accelerating progress, or of deepening malaise? Since the 1990s—the fall of the Berlin Wall, the collapse of the USSR, the founding of the WTO and China’s joining up with it, the advent of the World Wide Web—the mainstream focussed mainly on the dramatic gains being made: economically, politically, socially. Now the losses and the system stresses loom much larger in everyone’s thinking.

Either way, WE are the cause

An obvious theme running through every causal tale being told by social scientists is that we have done this to ourselves. There is no alien force, no Act of God, no extra-solar asteroid, to blame. We—that is to say, society—are somehow responsible for the gains and the losses. And so it’s no wonder that the ‘elites’ among us (which is to say, anyone who gained, or who stood in a position of power, during this period of change) have become the chief object of popular rage.

‘Change’ is the new axis that divides us

Given that we ourselves are the reason we’ve sailed into this unfamiliar territory, the choice before us is clear. Option One: To reverse course. To revert to the familiar way things were in our idealized memory. Option Two: To burn our ships. To demand of ourselves that we adapt to a new world.

‘Change’ is now the most important concept in our politics. More of it or less of it, forward or back, Liberal or Conservative: this is the debate that now animates society. It is far more relevant right now than the political debates of ‘Left’ vs ‘Right’—even to political parties themselves. In the U.S., for example, the Democratic Party is split between those who still see progressive possibilities in immigration, trade or technological disruption, and those who now want to slow down these trends for the sake of those who have been left behind. Republicans are divided, too. Some still see change as ‘creative destruction’ that generates wealth for those willing to work for it. Others now see change as a threat to a way of life, laying waste to traditional industries, traditional values and traditional communities.

If that’s right—if ‘change’ really has become the main axis of our political differences—then, to face these differences squarely, we’re going to need to make an additional choice. It used to be that ‘Left’ and ‘Liberal’ were one and the same choice, more or less. So, too, with ‘Right’ and ‘Conservative’. Now, they’re distinct. If you choose Left, you still need to choose again: Liberal (Hillary Clinton-esque) or Conservative (Bernie Sanders-ish). If you choose Right, you still need to choose again: Liberal (George Bush-ophile) or Conservative (Donald Trump-ization).


Resign…Or Reinvent?

We are, in a sense, back at the beginning. More change or less change: this is the oldest debate in political history. It’s far older than our debates about Left vs Right, which began relatively recently, with the seating arrangements of our Assemblée Nationale, our House of Parliament, our Congress.

That’s frustrating. Surely by now, after several thousand years of civilization, we should be ready to move on to new questions.

But it’s also exciting. A return to our political beginnings is an opportunity to renew and refresh ideals way down at the bedrock of civilization. And it’s an opportunity to reinvent political parties, and political leadership, to face head-on the old question that once again divides us.

Map #23: Here Come The Avatars


Just a short letter this week. I’m doing a bunch of interviews and podcasts in the U.S. at the moment, to coincide with the U.S. paperback release of Age of Discovery (Revised Edition). It’s hard to look at events in the U.S.—ranging from the Florida Parkland school shooting to the Trump Administration’s efforts to deport Dreamers—through a Renaissance lens. And often heartening, too.

(This is one of my favorite conversations so far, with American super-podcaster Scott Jones on his podcast, Give & Take.)

Thank you, all, for the flood of ideas in response to my letter last week. As you’ll recall, I’ve been searching for the best English-language equivalent to the Estonian concept of ‘kratt’, to help us have a clearer public conversation about A.I. in society—in particular, about the rights and responsibilities of soon-to-be-everywhere autonomous agents that will drive cars, buy groceries and manage stock portfolios on our behalf. Suggestions ranged from ‘butlers’ to ‘tin men’, and the idea I liked best came from my friend Ernesto Oyarbide: ‘avatar’.

Popular culture today probably associates the word ‘avatar’ most strongly with James Cameron’s 2009 Hollywood blockbuster by the same name (or, as the director himself called it, ‘Dances With Wolves in space’). I like the word because it meets the three criteria I set forth last week. To review, it:

  1. Captures the notion of an agent that represents, or is an extension of, my will;
  2. Omits the notion that the agent could formulate its own goals or agenda against my will; and
  3. Is instantly familiar, and thus intuitive, to a wide range of people.

The word itself originates with the Hindu notion that the gods can descend (the Sanskrit verb is ava-tara) to the human world by pouring their essence into another form. That original notion captures my #1 and #2 perfectly. As for my #3, the science fiction writer Neal Stephenson popularized the word as far back as 1992 with his bestseller, Snow Crash. In Neal’s book, real people controlled avatars in a virtual-reality world called the Metaverse. Since then, ‘avatars’ have become a common metaphor for ‘user IDs’ in many online communities. And the word will become even more recognizable once James Cameron releases all his Avatar sequels. (According to Vanity Fair, work has already begun on four sequels, to be filmed back-to-back-to-back-to-back through 2018, at a total budget of $1 billion.)

I also like the word because of its magical, mystical connotations. There is a branch of modern philosophy that traces the history of social thought and argues that civilization is due for a revival of magic. The Enlightenment ushered in an Age of Reason. Now, some argue, the pendulum is swinging back toward the spiritual, the mythical. (But that’s going to have to be the subject of a future letter…)

So thank you, everyone, and Ernesto, for ending my word-hunt. Here’s a prediction for you: by 2020, “Avatar law” is going to be a Real Big Thing — a serious branch of legal innovation, and probably a whole industry of punditry and startups as well. (If anyone wants to go further down this rabbit hole with me, let me know.)

Map #22: Catching Up With A.I.


The techno-optimists are driving AI forward.

And we, as citizens, are bombarded by the promises and portents of its consequences. AI will destroy our jobs. AI will eliminate drudgery and leave us more time to be creative. AI will solve our information overload. AI will save lives—on the road, in healthcare, on the battlefield. AI will end humanity. It will be our friend, says Bill Gates. It will be our enemy, says Elon Musk.

So which is it?

I’m still mentally unpacking from my trip to Estonia, the week before last. One of my most stimulating conversations that week was with Marten Kaevats, National Digital Advisor to the Prime Minister. Marten is a thirty-something thinker with shocking hair, a rambling, breathless rate of speech, and a knack for explaining difficult concepts using only the objects in his pockets. His job is to help the government of Estonia create the policies that will help build a better society atop digital foundations.

Marten and I had a long chat about AI—by which I mean, I nudged Marten once, snowball-like, at the top of an imaginary hill, and he rolled down it, gaining speed and size all the time, until flipcharts and whiteboard markers were fleeing desperately out of his path.

Here’s what I took away from it.


Fuzzy Language = Fuzzy Thinking = Fuzzy Talk

Marten’s very first sentence on the topic hit me the hardest: ‘You cannot get the discussion going if people misunderstand the topic.’

That is our problem, isn’t it? ‘AI’—artificial intelligence—is a phrase from science fiction that has suddenly entered ordinary speech. We read it in headlines. We hear it on the news. It’s on the lips of businesspeople and technologists and academics and politicians around the world. But no one pauses to define it before they use it. They just assume we know what they mean. But I don’t. Science fiction is littered with contradictory visions of AI. Are we talking about Arnold Schwarzenegger’s Terminator? Alex Garland’s Ex Machina? Stanley Kubrick’s HAL in 2001: A Space Odyssey? Ridley Scott’s replicants in Blade Runner? Star Wars’ C-3PO? Star Trek’s Lt. Commander Data?

Our use of the term ‘AI’ in present-day technology doesn’t clear things up much, either. Is it Amazon’s Echo? Apple’s Siri? Elon Musk’s self-driving Tesla? Is it the algorithm that predicts which show I’ll want to watch next on Netflix? Is it the annoying ad for subscription-service men’s razors that seems to follow me around everywhere while I browse the Internet? Is that AI? If so, god help us all…

We don’t have a clear idea of what they’re talking about. So how can society possibly get involved in the conversation—a conversation that, apparently, could decide the fate of humanity?


We’re Confusing Two Separate Conversations

Society needs to have two separate conversations about ‘artificial intelligence’. One conversation has to do with the Terminators and the C-3POs of our imagination. This is what we might call strong AI: self-aware software or machines with the ability to choose their own goals and agendas. Whether they choose to work with us, or against us, is a question that animates much of science fiction—and which we might one day have to face in science-reality. Maybe before the mid-point of this century. Or maybe never. (Some AI experts, like my good friend Robert Elliott Smith, have deep doubts about whether it’ll ever be possible to build artificial consciousness. Consciousness might prove to be a unique property of complex, multi-celled organisms like us.)

The other, more urgent conversation we need to have concerns the kind of AI that we know is possible. Call it weak AI. It’s not capable of having its own goals or agendas, but it can act on our behalf. And it’s smart enough to perform those tasks as well as, or better than, we could ourselves. This is Tesla’s autopilot: it can drive my car more safely than I can, but it doesn’t know that it’s ‘driving a car’, nor can it decide it’d rather read a book. This is IBM’s chess-playing Deep Blue, or Google DeepMind’s AlphaGo: they can play strategy games better than the best human, but they do not know that they’re ‘playing a game’, nor could they decide that they’d really rather bake cookies.

Most present-day public discourse on AI conflates these two very different conversations, making it difficult to have clear arguments, or reach clear views, on either of them.


A Clearer Conversation (If You Speak Estonian)

Back to my chat two weeks ago with Marten. What makes him such a powerful voice in Estonia on the questions of how technology and society fit together is that he doesn’t have a background in computer science. He began his career as a professional protestor (advocating rights for cyclists), then spent a decade as an architect and urban planner, and only from there began to explore the digital foundations of cities. When Marten talks technology, he draws, not upon the universal language and concepts of programmers, but upon the local language and concepts of his heritage.

Marten and his colleagues in the Estonian government have drawn from local folklore to conduct the conversation that Estonians need to have about ‘weak AI’ in language that every Estonian can understand. So, instead of talking with the public about algorithms and AI, they talk about ‘kratt’.

Every Estonian—even every child—is familiar with the concept of kratt. For them it’s a common, centuries-old folk tale. Take a personal object and some straw to a crossroads in the forest, and the Devil will animate the straw-thing as your personal slave in exchange for a drop of blood. In the old stories, these kratt had to do everything their master ordered them to. Often they were used for fetching things, but also for stealing things on their master’s behalf or for battling other kratt. ‘Kratt’ turns out to be an excellent metaphor to help Estonians—regardless of age or technical literacy—debate deeply the specific opportunities and ethical questions, the new rights and new responsibilities, that they will encounter in the fast-emerging world of weak AI servants.

Already, Estonian policy makers have clarified a lot of the rules these agents will live under. #KrattLaw has become a national conversation, from Twitter to the floor of their parliament, out of which is emerging the world’s first legislation for the legal personhood, liability and taxation of AI.


Translating ‘Kratt’?

Is there an equivalent metaphor to help the rest of us do the same? In 1920, the Czech science fiction writer Karel Čapek invented the word ‘robot’ (from the Slavic word ‘robota’, meaning forced labor). At the time—and ever since—it has helped us to imagine, to create and to debate a world in which animated machines serve us.

Now, we need to nuance that concept to imagine and debate a world in which our robots represent us in society and exercise rights and responsibilities on our behalf: as drivers of our cars, as shoppers for our groceries, as traders of our stock portfolios or as security guards for our property.

I haven’t found the perfect metaphor yet; if you do, please, please share it with me. The ideal metaphor would:

  1. Capture the notion of an agent that represents, or is an extension of, our will;
  2. Omit the notion that the agent could formulate its own goals or agenda; and
  3. Be instantly familiar, and thus intuitive, to a wide range of people.

My first thought was a ‘genie’, but that’s not quite right. Yes, a genie is slave to the master of the lamp (1), and yes, we’re all familiar with it (3), but it also has its own agenda (to trick the master into setting it free). That will to escape would always mix up our public conversation between ‘weak’ and ‘strong’ AI.

My other thought was a ‘familiar’, which fits the concept of ‘weak AI’ closely. In Western folklore, a familiar (or familiar spirit) is a creature, often a small animal like a cat or a rat, that serves the commands of a witch or wizard (1) and doesn’t have much in the way of its own plans (2). But I doubt enough people are familiar (ba-dum tss) with the idea for it to be of much use in public policy debates—except, perhaps, among Harry Potter fans and other readers of fantasy fiction.


We Can Start Here

I only know that we need this conceptual language. During last month’s stock market collapse, billions of dollars were lost by trading bots that joined in the sell-off. Is anyone to blame? If so, who? Or who will be to blame when—as will eventually happen—a Tesla on autopilot runs over a child? The owner of the algorithm? Its creator? The government, for letting the autopilot drive on the road?

Every week, artificially intelligent agents are generating more and more headlines. Our current laws, policy-making, ethics and intuitions are failing to keep pace.

With new language, we can begin to catch up.

Map #21: Hard Choices Or False Choices In Digital Paradise


Every week, we’re greeted with a new story about a cyber hack or attack. The 2018 Winter Olympic Games website was hacked during the opening ceremony last Friday night. Last week, it was confirmed that Russian operatives had hacked voter registration databases in multiple US states prior to the 2016 presidential election. Over the last month, a total of several billion dollars’ worth of crypto-currencies have been stolen in multiple cyber-bank heists. The biggest hack in the last year was of Equifax, the US consumer credit scoring company, whose 145.5 million records were stolen—including people’s names, social insurance numbers, drivers licenses, dates of birth and addresses. Globally, almost one billion Internet users were affected by malware or a virus in 2017.

All of which raises the question: How wise is it for us to build a ‘smart’ society—one that increasingly relies upon the digital medium for everything from filing taxes to driving cars?

I was in Estonia last week, escorting a delegation of government ministers from the Persian Gulf state of Oman, to help them find answers. Estonians were forced to ask this question sooner than most of us. Estonia is a small Baltic state of 1.3 million people. It’s a member of NATO. It’s one of the most digitized societies in the world. And it shares a border with Russia. In 2007, the Russian government hit Estonia’s digital infrastructure with a cyber-assault that temporarily shut down the country’s parliament, banks, ministries, newspapers and broadcasters.

Up until that attack, Estonia’s leaders, especially in government, were focused on building a digital paradise. And they’ve been succeeding. For most Estonians, calculating and filing one’s personal income taxes each year takes less than two minutes (and this year can be done with a few finger taps on the Tax Office’s Apple Watch app). Ambulance medics can know your medical history and medications before they arrive at the scene of your accident. Firefighters can know how many people are in your burning building (and if you have mobility problems) before they arrive at the scene of an alarm. Students can send their transcripts to a university with a single tap, and it takes less time to open a bank account or register a company than anywhere else in the world.

The government estimates that it has eliminated one full week—per year, per citizen—of time spent accessing government services: filling forms, standing in lines, filing taxes. The increased productivity across the whole economy is enough to fund the country’s entire national defense budget. Other benefits are harder to quantify. While Americans debate whether to add more polling stations or keep polls open later on Election Day, Estonians can vote online anytime (for a week until the polls close) from any device—anywhere in the world.

Paradise Lost?

But the 2007 cyber attack forced Estonia’s leadership to admit that it had been too blasé about securing its digital way of life up to that point.

Safety. Security. Privacy. Sharing. Trust. The digital medium puts all these values in new tension with each other. And those tensions need to be resolved.

Take privacy and sharing. Amidst so many cybercrimes, data privacy has become a public concern. We’re learning not to trust governments and corporations with our data. Perhaps instead of following the Estonian model, we should insist that government only use our personal data for the explicit purpose for which it was collected—and destroy it afterwards.

Understood this way, data privacy stands in opposition to data sharing. Data sharing is the practice of exchanging and aggregating our personal data, for the sake of efficiency or, in this age of algorithms, to discover important patterns to help us do things better.

We can either share our data to make society ‘smarter’. Or we can preserve everyone’s individual privacy.


Dial Back? Or Double Down?

Estonia is trying hard to expose this choice as a false one. Faced in 2007 with the question of reversing course or charging ahead with its digital agenda, the country’s leadership clarified a core belief: the digital medium is here to stay. A society can no more turn away from the digital medium than Europe could turn away from the print medium 500 years ago.

If that’s right, then the only way out of these new tensions is through. The Estonian argument I heard last week is that data privacy and data sharing are compatible, once the latter is properly understood. Part of our misunderstanding stems from the use of the word ‘sharing.’ This is a misnomer. It suggests: I give you my data, and you give me yours. Estonians do not ‘share data’ in this vague way. Instead, they break ‘sharing’ into two precise ideas: data ownership and data contracts.

For example: One of the most commonly used public databases in Estonia is the population registry: a database that contains every resident’s vital statistics (name, date of birth, gender, etc) and address. Such basic data is useful to almost every public- and private-sector organization, in almost any transaction. But it has only one legally liable owner: the Ministry of Statistics. Before any other organization can access the data (say, the police or a bank), they must negotiate a contract with the Ministry that specifies their data privileges and responsibilities. Typically, such contracts are for the minimum data needed to satisfy a valid query. The Ministry of Statistics won’t reveal a resident’s full address when a Yes/No answer—‘Is this person a resident, Y/N?’—will suffice.
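Since I’ve been made to learn Python anyway, here is the data-contract idea in a few lines of it. This is a toy illustration, not Estonia’s actual infrastructure; the registry entries, requester names and field lists are all invented. The point is that the owner answers the narrowest question a contract allows, rather than handing over the record:

```python
# A toy sketch of 'data ownership plus data contracts'.
# The registry has one owner; everyone else gets only contracted fields.

POPULATION_REGISTRY = {
    "38001010000": {"name": "Mari Maasikas", "address": "Tallinn, Pikk 1", "resident": True},
}

# Each contract lists exactly which fields a requester may see.
CONTRACTS = {
    "police": {"name", "address", "resident"},
    "bank": {"resident"},  # a bank only ever learns Yes/No residency
}

def query(requester: str, person_id: str, field: str):
    """Answer a single-field question, if the requester's contract covers it."""
    allowed = CONTRACTS.get(requester, set())
    if field not in allowed:
        raise PermissionError(f"{requester!r} has no contract covering {field!r}")
    record = POPULATION_REGISTRY.get(person_id)
    return None if record is None else record[field]

print(query("bank", "38001010000", "resident"))  # True — the Yes/No is all the bank gets
```

Asking the same toy registry for a field outside the contract—`query("bank", ..., "address")`—raises a `PermissionError`, which is the whole design: the minimum data needed to satisfy a valid query, and nothing more.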

Every transaction involving my personal data is recorded (Which entity requested what data for what purpose?), and I can access a log of all those transactions online at any time. This transparency helps me to trust that my data isn’t being misused.
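That transparency log could be sketched the same way—again a hypothetical illustration with invented names, not the real system. Every query is appended to a log, and the data subject can pull their own slice of it at any time:

```python
# Hypothetical sketch of the access log behind that transparency:
# every request is recorded (who asked, for what, and why),
# and a citizen can list every entry about themselves.
import datetime

AUDIT_LOG = []

def log_access(requester, person_id, field, purpose):
    """Record one data request against one person's record."""
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "who": requester,
        "subject": person_id,
        "field": field,
        "purpose": purpose,
    })

def my_log(person_id):
    """What a citizen sees when checking their own access history online."""
    return [entry for entry in AUDIT_LOG if entry["subject"] == person_id]

log_access("police", "38001010000", "address", "traffic stop")
for entry in my_log("38001010000"):
    print(entry["who"], entry["field"], entry["purpose"])
```

The trust doesn’t come from the log being fancy; it comes from the subject being able to read it.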

It was remarkable to see for myself the daily conveniences that Estonia has built atop this trust foundation. Perhaps most remarkably, all this trusted data exchange has actually increased data privacy for the average resident. As a Canadian in the UK, I’d need to show my passport to a letting agency to rent an apartment—which gives the letting agency far more information about me than they have any business knowing—or storing on their insecure office machines. In Estonia, all the letting agency needs to know is what their digital query to the Immigration Office tells them: Is this person an eligible resident, Y/N?

Estonia is also trying to find the way through hard choices on cyber security. Its post-2007 security ethos: no digital network is 100% secure. Everything is hackable. Therefore, securing a digital society must be about resilience (be the harder target, so that hackers go after someone easier) and recoverability (when you get knocked down, how quickly can you get back up?).

Estonia demonstrated its resilience during the May 2017 WannaCry ransomware attack by North Korea, which crippled more than 200,000 computers across 150 countries—but did not affect a single machine in Estonia. And it is demonstrating its commitment to recoverability this month, as it formally opens the world’s first ‘data embassy’ in Luxembourg. (Its data embassies will back up all essential public data—and will be able to take over running public data services if the country’s own servers fall to cyberattack again.)


Hard Choices? Or False Choices?

My week of conversations in Estonia left me with two dominant impressions. The first is that hard choices under an old paradigm can become false choices under a new one. As the news, good and bad, of our digital capabilities and vulnerabilities continue to crowd the headlines, will we have the vision, and the wisdom, to make that distinction?

The other impression is that when it comes to the digital medium, the greatest risk may be to linger halfway between the analog way and the digital one. Judging by our daily habits, we are all quite happy to reap the benefits of the digital medium. Are we prepared to shoulder its responsibilities as well?

Remember John Podesta, the chairman of Hillary Clinton’s 2016 campaign, whose emails were hacked and posted to Wikileaks? His Gmail password was runner123