Map #34: The American Dream—Or Fantasyland?

After my last letter (which took a deep dive into my friends’ research on the “power of doubt”) and with Donald Trump here in Europe for meetings with NATO, the Queen of England and Vladimir Putin all in the same week, now felt like the ideal moment to read and reflect on Fantasyland: How America Went Haywire, by Kurt Andersen.

When it comes to the state of the world nowadays, it seems we can’t say anything without the aid of one or more phrases that, until recently, we’d never used before: Fake News. Post-Truth. Alternative Facts. Check Google Trends for yourself: these terms barely existed in our lexicon prior to Oct/Nov 2016. The newness of these patterns of speech enhances the impression that something very new (and, depending on your politics, very disturbing) is happening to our political discourse.

But Kurt’s research (which he began back in 2013, long before the political earthquakes of 2016) argues that the present-day collapse of objective truth isn’t new at all. In hindsight, we’ve been eroding its foundations for 500 years.

By “we,” Kurt means Americans, but I think the U.S. case offers a cautionary tale to every democracy that traces its roots back to the Enlightenment.

It certainly got me thinking.

Warmly,

Chris


A Recipe For The American Dream

To summarize crudely: European settlers to the New World brought with them two sets of beliefs that, when mixed together, produced the early American outlook on life: Enlightenment beliefs about the world, and Puritan beliefs about the self.

Free-Thinking

These settlers had been born into a Europe that was rapidly breaking free from its medieval mindset. In spectacular fashion, the voyages of Columbus, the mathematics of Copernicus and the astronomy of Galileo had all proved that neither the Bible nor the ancient Greeks had a monopoly on truths about the world. Present-day reason and observation could add to—even overturn—the oldest received truths.

Self-Made

They had also been born into a Europe that was violently tearing itself into Catholic and Protestant halves. Martin Luther, who began his career as a devout monk, ultimately denounced his Catholic Church for standing between God and ordinary Christians. The Catholic Church’s arrogance was to believe that each tier in its hierarchy stood one level closer to God. Ordinary Christians, if they ever hoped to reach God, needed that hierarchy to give them a boost.

Said Luther: Bullshit (or words to that effect). Every baptized Christian had a direct and personal relationship with God, and everything they needed to do to grow that relationship, they could read for themselves in the Bible. Salvation lay not in the grace granted by a priest, but in their own diligent, upright behavior.

Question: What do you get when you combine Enlightenment beliefs about the world (“Facts are findable”) with Puritan beliefs about the self (“I am whatever I work hard to become”)?

Answer: The American Dream (“I can change my reality.”)


Too Much Of A Good Thing

Taken in moderation, Kurt argues, America’s two settler philosophies powerfully reinforce each other. The free-thinker, who believes that the truth is out there just waiting to be discovered, gives the self-made Puritan something tangible to strive toward: Better knowledge. Deeper understanding. A practical pathway toward that utopian “shining city on a hill.”

Of course, Americans rarely do anything in moderation.

Some 2,400 years ago, Aristotle observed that every virtue is a balancing act. Courage is a virtue. But too much courage becomes a vice, recklessness. Confidence is a virtue. Too much confidence, and we begin to suffer from hubris (a recurring theme in my recent letters, here and here). Honor can slip into the zone of pride and vanity. Generosity can slip into wastefulness.

It’s one of the big, recurring themes in human myth and morality. Daedalus warned his son Icarus to fly the middle course, between the sea’s spray and the sun’s heat. Icarus did not heed his father; he flew up and up until the sun melted the wax off his wings. (He fell into the sea and drowned.) The Buddha taught people to walk the Middle Way—the path between the extremes of religious withdrawal and worldly self-indulgence. And so on.

What would happen if America’s founding philosophies were pushed to their extremes, with no regard for the balance between virtue and vice? To summarize Kurt’s thinking, in my own words:

The same outlook on the external world that entitles us to think for ourselves can lead us to a place of smug ignorance. How far a journey is it from the belief that facts can be found to the belief that facts are relative? (I found my facts; you are free to find yours.) How far a journey is it from the healthy skeptic who searches for the truth “out there,” to the suspicious conspiracy theorist who maintains that the truth is still out there? (i.e., Everything we’ve been told up until now is a cover-up, and no amount of evidence is going to shake my suspicion that the real truth is being hidden from us.)

Likewise, the inward-looking conviction that we are self-made can, if pushed to extremes, leave us self-deluded. If we follow our belief that “I am whatever I work hard to become” far enough, might we arrive at the conviction that “I am whatever I believe I am”? Might we arrive at the conviction that self-belief is stronger than self-truth?

Unbalanced and unrestrained, the same two philosophies that produced self-made free-thinkers in pursuit of the American Dream (“I can change my reality”) can also spawn a Fantasyland of self-deluded ignoramuses, for whom reality is whatever they believe hard enough.


Keeping It Real

That destination isn’t inevitable. Icarus didn’t have to fly so close to the sun. Enlightenment and Puritan traditions both contained wisdom meant to temper the power of the individual to change things with the ‘serenity’ to accept what he or she could not change. (Most of us have heard the Serenity Prayer, written by the American theologian Reinhold Niebuhr (1892-1971) during the Great Depression.)

What separates freedom of thought from smug ignorance? The answer, for Enlightenment thinkers, lay in the virtues of humility, self-doubt and reasonableness. (If the privileged Truths of the past—a flat Earth, a Flood, a cosmos with humanity at the center—had to accept demotion in the face of new ideas and fresh evidence, what right did any of us have to insulate our own favorite Truths from the same tests?)

And what separates the self-made from the self-deluded? The answer, for early Puritans, lay in the virtues of hard work, discipline and literacy. (Martin Luther told Christians that they could have a personal relationship with God—‘DIY Christianity’, as Kurt calls it. But they still had to get their hands on a Bible, learn to read it, struggle to decipher it and strive to live by it. His was a harder, more demanding path than the Catholic routine of taking communion and giving to charity once every seven days.)

Unchecked By Reality

These are precisely the virtues that are being eroded in U.S. society, Kurt thinks. And not by the recent emergence of social media, or of cable news, or of unlimited corporate cash in politics. It’s been happening for 500 years.

I’m a poor student of American history. I wouldn’t know how to critique Kurt’s take on it. But he does weave a compelling tale of how, over the centuries, repeated doses of epic individualism and religious fervor have built up the population’s immunity to reality checks. The get-rich-quick gold rushes in Virginia and California. The Salem witch trials. The spectacle (speaking in tongues, channeling the Holy Spirit, seizures and shouting) of America’s early evangelical churches. The wholesale fabrication of new religions, from the Book of Mormon to the Church of Christ, Scientist. Pseudoscientific medical fads, from magnetic healing to alchemy to electric shock therapy to Indian Cough Cure. The Freemason conspiracy behind the American Civil War. P.T. Barnum, Buffalo Bill, Harry Houdini. By 1900, reality’s power to impose itself on Americans’ sense of self and world had been significantly eroded. Or, as Kurt put it:

If some imaginary proposition is exciting, and nobody can prove it’s untrue, then it’s my right as an American to believe it’s true. (p.107)

America’s 20th century was a story of military, economic, social and scientific achievement, but it also saw the discovery of new solvents that dissolved the boundary between reality and fantasy further and further. Orson Welles’s science-fiction radio broadcast, War of the Worlds, caused a real-life panic among listeners. Modern advertising was born, to invent new desires and ways to satisfy them. Napoleon Hill published Think and Grow Rich. In the 1950s, Disneyland opened its doors (so that adults could relive their childhood) and modern Las Vegas was born (so that adults could escape the responsibilities they had accumulated). In 1953, the first James Bond novel and the first Playboy magazine were published. The L.A. science-fiction author L. Ron Hubbard started up the Church of Scientology. McCarthyism accused everyone from leftist filmmakers to President Harry Truman of conspiring with Moscow. The Kennedy assassination, the truth of which is still out there!

Humility and reasonableness; discipline and hard work. These virtues continued to take a beating. In the 1950s, Reverend Norman Vincent Peale (who mentored, among others, Donald Trump) published The Power of Positive Thinking, which Kurt describes as “breezy self-help motivational cheerleading mixed with supernatural encouragement.” It stayed on the New York Times bestseller list for 150 weeks.

Woodstock. Hippies. Beatniks. The New Age thinking of the 1970s: ‘What we see in our lives is the physical picture of what we have been thinking, feeling and believing.’ In other words, we create our own reality.

‘You are you, and I am I. If by chance we find each other, it’s beautiful. If not, it can’t be helped.’ (Fritz Perls, the Gestalt therapist who became a fixture of the Esalen Institute)

ESP. Parapsychology. Science fiction fan conventions and Burning Man festivals. Pro wrestling. Reality television.

The point is this: Yes, new technologies—the internet, mobile, social media—have eliminated the gate-keepers who used to limit the privilege to speak publicly. But how eagerly people listen, and whether people believe what they hear, is a function of audience culture.


Reality Will Triumph In The End…Won’t It?

What stands between belief in the American Dream and belief in a Fantasyland—once the virtues that acted as a barrier between the two have been dissolved?

One answer, of course, is reality. In Age of Discovery, I (rather airily) advised:

‘We are all going to ignore many truths in our lifetime. Try not to. Ultimately, reality is hard to reject. Healthy, successful people—and societies—are those that build upon it.’

Now that I re-think about it, that argument is too simplistic. Much of human civilization up to now has passed in self-deluded ignorance. Our cave-dwelling ancestors believed that by drawing a picture of their buffalo hunt, they caused the kill to happen. By the standards of modern science, that’s ridiculous. But it worked well enough (perhaps as a motivational tool?) to get our species to the here and now. History shows that humanity can be irrational, can be unreasonable, can be deluded into magical thinking, and still survive. Even thrive.

Reality doesn’t always assert itself in human timescales. We can ignore reality, and get away with it. For a long, long time.

But given the sheer scale of humanity today, I suspect we’ve now reached the point of reckoning. Mass delusion is a luxury we can no longer afford. Unfortunately, it’s also a hard habit for us to break.


No Easy Answers. But Clear Questions

Post-truth. Alternative facts. We’ve introduced these new phrases into our language to talk about our suddenly new condition. Only it’s not suddenly new. It’s just suddenly obvious.

How do we navigate society out of this fantasyland and back to a shared reality? The answer is not simply “social media regulation” or “more technology.” These knee-jerk prescriptions might play a part, but they lack the leverage to stop 500 years of momentum in the other direction.

The answer is not simply to “get involved in politics,” either. Politics is a game of public speech that we play within an audience culture. Victory, as we’ve seen, often goes to those who understand what their audience likes to hear, not to those who ask their audience to hear differently.

Nor is the answer to teach our children critical thinking skills in school (although, again, that might play a part). Lies dressed up as facts in the media are just one symptom, one corner, of the wider culture they’re growing up in—a culture in which Kylie Jenner (of the Kardashian clan) is on track to become the world’s youngest-ever billionaire and Fifty Shades of Grey is the most-read novel in America.

Kurt, to his credit, offers no easy solutions at the end of his Fantasyland:

What is to be done? I don’t have an actionable agenda. I don’t have Seven Ways Sensible People Can Save America from the Craziness. 

But if his map of the territory is correct, then the only genuine answer is to shift culture. To rebuild the virtues of discipline and hard work, of humility and reasonableness, as a community-owned dam between reality and fantasy. To retreat from self-deluded ignorance back to the healthier zone of self-made free-thinkers.

That sounds nice—and kinda impossible. “Culture” is a fuzzy, emergent property of a population. It’s real, but we can’t touch it.

We can, however, see it—at least, we’re starting to. And, like every other aspect of our social reality (e.g., gender bias, media bias), the clearer we see it, the more it falls under our conscious influence. That’s how cultural revolutions—good and bad, #metoo and #fakenews—happen.

And so, even if there are no obvious exits from fantasyland, the obvious next step is to draw for ourselves clear maps of our current cultural landscape, so that we can hold them up to conscious scrutiny:

1. Who are our heroes? Who are our villains? What makes them so?

2. Who are our “priests,” our “thought leaders,” our “social influencers”? What gives them their authority among us?

3. What are our myths—the shared stories that we celebrate and share to exemplify how to be and behave? What virtues do those myths inspire in us? What vices do those virtues entail, if left unchecked?

The more bravely we explore these questions, the better pathfinders we’ll be.

Map #33: The Power Of Doubt

Last week, I helped open OXSCIE 2018, a global summit on the future of education at Oxford University. It was a special event, focused mainly on challenges in the developing world. Delegates were a balance of education policy makers, academic researchers, donors and teachers, and each of those four groups was represented by some of the world’s best and most influential. Two personal highlights for me were meeting Andrea Zafirakou, the high school art teacher who won the $1 million ‘Best Teacher in the World’ prize for 2018; and Lewis Mizen, a 17-year-old survivor of the Marjory Stoneman Douglas High School shooting in Florida this February (he’s now a political activist).

The official theme of the summit was “Uncertainty, Society and Education.” That must rank among the broadest conference themes in the history of conferences. 

(As an aside, the list of “broad conference themes” is a crowded field. Did you know that the single most common conference theme in the world is “Past, Present and Future”? e.g., Astrochemistry: Past, Present and Future; Libraries: Past, Present and Future; even Making Milk: Past, Present and Future…which, as an even further aside, is a much wider and deeper topic than I first imagined! Here’s that conference’s output.)

Brave voyages indeed,

Chris

 

Too Much V.U.C.A.

Back to Oxford. The organizers (Aga Khan Foundation, Global Centre for Pluralism and Oxford) asked me to set the stage for their two-day event by talking for 10 minutes around the question, “What is uncertainty?” 

(Admittedly I went slightly over time. Well, way over time.)

Because I love this question. I love it, because it challenges us all to stop and think about a word—uncertainty—that gets thrown around more and more every day.

At virtually every leadership-ish event I go to these days, people talk about the V.U.C.A. world. Volatile, Uncertain, Complex, Ambiguous. The acronym originated at the US Army War College in the early 1990s (see the original, much-photocopied essay), where, in the aftermath of the Cold War, military officers and students struggled to make sense of the new strategic environment. It has since entered popular language in executive forums because it seems to capture the mood that many people—especially people burdened with decision-making responsibilities—feel.

In my slideshow, I threw up a couple random stock illustrations of the V.U.C.A. acronym that I had pulled off the Internet. (Another aside: The availability of stock art to illustrate how to make sense of the world is, I think, a big red flag that we need to examine these sense-makings more closely before we adopt them ourselves!) 

If you run this Google Image search yourself, you’ll come across some stock definitions that are laughably bad. e.g.,

Uncertainty: When the environment requires you to take action without certainty.

Others hit closer to the mark:

Uncertainty speaks to our very human inability to predict future outcomes. In uncertain environments, people can be hesitant to act because of their inability to be sure of the outcome, leading to inaction and missed opportunities.

This definition, I think, captures the concepts that decision-makers have in mind when they talk about uncertainty today: unpredictability, hesitation, inaction, missed opportunities. The broad notion is that uncertainty is a bad thing, a negative, because ideally what we’d possess whenever we make decisions is certainty:

Is that the right action to take? Yeah, I’m sure. Are you certain? I’m certain. Then what are you waiting for? Go and do it.

 

Do you love that person? Yeah. Are you certain? I’m certain. Then marry them.

Certainty is our preferred foundation for the biggest, most important decisions that we make in life.

If that’s right—and if uncertainty is the absence of that kind of decision-making ability—then we’re in trouble. We’re going to be hesitant and miss out on a lot of opportunities in our lifetime—because the present world is full of uncertainty. 

So Much We Don’t Know

Consider a few of the biggest and most obvious causes of uncertainty about humanity’s present condition:

Urban Unknowns

Take urbanization. The world boasted two mega-cities of more than 10 million people in 1950—New York and Tokyo. Today there are 40, two-thirds of which are in the developing world. Humanity’s urban population has quadrupled in the past 75 years, and that quadrupling has put everyone in closer proximity to everyone else. Now, 95% of humanity occupies just 10% of the land. Half of humanity lives within one hour of a major city—which is great for business, and also great for the spread of disease. Despite all our medical advances, the 2017/18 winter was the worst flu season in a decade in North America and Europe. Why? Because it was the worst flu season in a decade in the Australian winter, six months prior. Thanks to the global boom in livestock trade and tourism, we now live in a world where, whenever Australia sneezes, the rest of us might get sick. And vice versa.

Environmental Iffiness 

Or take environmental change. In 1900, there were 1.6 billion humans on the planet, and we could ignore questions of humankind’s relationship with the biosphere. Now we are 7.4 billion humans, and it seems we can no longer do so—an inconvenient truth which South Africa re-learned this spring when Cape Town’s water reservoirs dried up, and which the UK re-learns every year during its now-annual spring flash floods.

Maybe renewable energy technologies will solve our biggest climate problems. In just the past twenty years, renewable power generation has grown nine-fold, from 50 MToE (million tonnes of oil equivalent) in 1998 to 450 MToE today. 

Or maybe not. Looking twenty years into the future: on current trends, by 2040 renewables will still only make up 17% of the global energy supply, up from 12% today. The world keeps growing, and growth is energy intensive. 

Demographic Doubts

Take demographics. At the beginnings of life: Fertility is falling worldwide. Birth rates are already at, or below, replacement levels pretty much everywhere except in Africa. At the end of life: Life expectancy is rising worldwide. In 1950, the average human died at age 50; today, at age 70. (That latter statistic is maybe the most powerful, least debatable, evidence of “progress” that human civilization might ever compose.)

The combination of both those trends means that humanity is aging rapidly, too. In 1975, the average Earthling was 20 years old; today, the average Earthling is 30 years old. That extra decade has mammoth consequences for every aspect of society, because virtually every human want and need varies with age. 

One of the easiest places to see this impact is in the public finances of rich countries. In the US, to deliver current levels of public services (everything from education to health care to pensions) to the projected population in 2030, taxpayers will need to find an additional US$940 billion. In the UK, they’ll need to find another US$170 billion, and in Canada they’ll need to find another US$90 billion. Why? Fewer taxpayers, more recipients. How will we fill this funding gap?

Economic and Political Insecurities

Meanwhile, a tectonic shift in global economics is underway. By 2050, China’s economy will be twice the weight of the US’s in the world. India’s will be larger, too. What changes will that bring? 

Geopolitics is going through a redesign. The post-WWII liberal, rules-based international order is anchored in a handful of multilateral institutions: the UN, the WTO, the IMF, the World Bank, the EU, the G7. In the very countries that built this order, more and more people are asking, “Is this the order that can best deliver me peace and prosperity in the future?” Those who loudly shout “No!” are winning elections and shifting the tone and content of democratic discourse. In the US and UK, trust in government has fallen to about 40%. Q: Which country’s public institutions boast the highest trust rating in 2018? A: China (74%). All of which raises the question: Whose ideas will govern the future?

Analysis Paralysis…?

In short, humanity is sailing headlong into trends that will transform us all in radical, unpredictable ways—and that’s before we even begin to consider the “exponential technologies” that occupied my last two letters.

We do live in a moment of big unknowns about the near future—bigger, maybe, than at any other time in human history. This could be the proverbial worst of times, or best of times—and both seem possible. 

So if “uncertainty” is what we generally take it to be (i.e., an unwanted ignorance that makes us hesitate), then our world must be full of indecision and inaction and missed opportunities right now.

I’m More Worried About Certainty 

Here’s where I call bullshit on this whole conception of uncertainty. It just doesn’t feel right.

And feeling is the missing dimension. To define “uncertainty” as our degree of not-knowing about the world around us is, I think, only half-right. Half of uncertainty is “What do we know?”, out there. But the other half of uncertainty is “How do we feel about that?”, in here.

Once we understand that uncertainty exists in this two-dimensional space, I think we better appreciate its positive role in our thinking and acting. And I think we discover that the danger zone isn’t uncertainty. It’s certainty.

Four Sins Of Certainty 

Take the fearless know-it-all. “I understand my domain better than anyone else. That’s how I’ve gotten to where I am today, and that’s why I can make bold moves that no one else is willing to.” We call that myopia.

What about the anxious expert? They’ve done everything possible to prepare themselves for the job in front of them. That’s why, if they fail, they know the depth of their failure better than anyone. That’s what we call angst.

How is it possible to be certain AND not know? By being certain that you don’t know—and that you need to know. You need to act, and you know that the consequences of acting wrong can break you. You see all the variables, but no possible solution. So you’re stuck. We call that paralysis.

To illustrate each of these three ‘sins’, I mapped them to the recent behaviors of three prominent political leaders. I tried really hard to think of a world leader who would fit into the fourth quadrant…but so far I’ve drawn a blank. Any ideas? It would have to be somebody who (a) doesn’t know anything, but (b) doesn’t care because knowledge is for losers. What matters is willpower. 

We call that hubris. 

Four Virtues Of Uncertainty 

The healthy role for uncertainty, it seems, is to protect us against these extremes:

Uncertainty can challenge myopia, by confronting the know-it-all with a diversity of views. 

Uncertainty can validate the angst-ridden agent. Through peer mentoring and support, we are reminded that, even if our mistakes have been great, and final, those losses do not say everything about us. Validating voices force us to recognize the positives that we’ve omitted from our self-knowledge.

Uncertainty can replace our paralysis with a focus on learning. True, some critical factors may be unknowable, and some past actions may be unchangeable. But not all. A good first step is to identify the things we can affect—and understand them better.

Uncertainty can prepare us to weather the consequences of hubris. Maybe you are always right. Maybe you do have magic eyeglasses to see reality more clearly than everyone else. But just in case, let’s do some risk management and figure out what we’ll do if it turns out that you aren’t, or don’t.

 

The Power Of Doubt 

A few letters ago, I mused about the danger of too much truth in democratic societies:

When too many people believe they have found truth, democracy breaks down. Once truth has been found, the common project of discovery is complete. There is no more sense in sharing power with those who don’t realize it…To rescue the possibility of groping toward Paradise democratically, we need to inject our own group discourses with doubt. 

Now I’m beginning to understand that the power of doubt extends far beyond the political realm. It’s an important ingredient in all domains of leadership—from figuring out the future of education, to making decisions in our professional lives, to maintaining our mental health. (A couple of my friends at Oxford Saïd Business School, Tim Morris and Michael Smets, did a major study among CEOs on this topic.) 

Can we get comfortable with being uncomfortable? That, I think, is one of the critical skills we all need to develop right now. 

Map #32: Homo Hubris? (Part 2 of 2)

Technology drives change. Does it also drive progress? 

Those eight words sum up a lot of the conversation going on in society at the moment. Some serious head-scratching about the whole relationship between “technology” and “progress” seems like a good idea.

In Part 1, I summarized “four naïveties” that commonly slip into techno-optimistic views of the future. Such views gloss over: (1) how technology is erasing the low-skilled jobs that, in the past, have helped poor countries to develop (e.g. China); (2) how, in a global war for talent, poorer communities struggle to hold onto the tech skills they need; (3) how not just technology, but politics, decides whether technological change makes people better off; and (4) how every technology is not just a solution, but also a new set of problems that society must manage well in order to realize net gains.

Technology = Progress?

The deepest naïveté—the belief lurking in the background of all the above—is that technological change is a good thing.

This is one of the Biggest Ideas of our time—and also one of the least questioned…

It wasn’t always so obviously true. In 1945, J. Robert Oppenheimer, upon witnessing a nuclear explosion at the Manhattan Project’s New Mexico test site, marked the moment with a dystopian quote from the Bhagavad Gita: “I am become death, destroyer of worlds.” 

But within ten years, and despite the horrors of Hiroshima and Nagasaki, a far more utopian spin on the Atomic Age had emerged. Lewis Strauss, architect of the U.S. “Atoms for Peace” Program and one of the founding members of the Atomic Energy Commission, proclaimed in 1954 that: 

It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, and will know of great periodic famines in the world only as matters of history. They will travel effortlessly over the seas and under them, and through the air with a minimum of danger and at great speeds. They will experience a life span far longer than ours as disease yields its secrets and man comes to understand what causes him to age. 

What happened in the years between those two statements to flip the script from techno-dystopia to techno-utopia?

Wartime state-sponsored innovation yielded not only the atomic bomb, but: better pesticides and antibiotics; advances in aviation and the invention of radar; plastics and synthetic fibers; fertilizers and new plant varieties; and of course, nuclear energy.

Out of these achievements, a powerful idea took hold, in countries around the world: science and technology meant progress. 

In the U.S., that idea became official government dogma almost immediately after the war. In a famous report, Science: The Endless Frontier, Vannevar Bush (chief presidential science advisor during WWII, leader of the country’s wartime R&D effort and co-founder of the U.S. arms manufacturer Raytheon) made the case to the White House that (a) the same public funding of science that had helped win the war would, if sustained during peacetime, lift society to dizzying new heights of health, prosperity and employment. It also warned that (b) “without scientific progress, no amount of achievement in other directions can insure our health, prosperity and security as a nation in the modern world.” But Vannevar also framed the public funding of scientific and technological research as a moral imperative:

It has been basic United States policy that Government should foster the opening of new frontiers. It opened the seas to clipper ships and furnished land for pioneers. Although these frontiers have more or less disappeared, the frontier of science remains. It is in keeping with the American tradition—one which has made the United States great—that new frontiers shall be made accessible for development by all American citizens.

Moreover, since health, well-being and security are proper concerns of Government, scientific progress is, and must be, of vital interest to Government. Without scientific progress the national health would deteriorate; without scientific progress we could not hope for improvement in our standard of living or for an increased number of jobs for our citizens; and without scientific progress we could not have maintained our liberties against tyranny.

In short, science and technology = progress (and if you don’t think that, there’s something unpatriotic—and morally wrong—about your thinking).

The High Priests of Science & Technology Have Made Believers Of Us All 

In every decade since, many of the most celebrated, most influential voices in popular culture have been those who repeated and renewed this basic article of faith—in the language of the latest scientific discovery or technological marvel. E.g., 

1960s: John F. Kennedy’s moonshot for space exploration; Gordon Moore’s Law of exponential growth in computing power; the 1964-65 New York World’s Fair (which featured future-oriented exhibits like Bell Telephone’s PicturePhone and General Motors’ Futurama)

1970s: Alvin Toffler’s Future Shock, which argued that technology was now the primary driver of history; Carl Sagan, who argued that scientific discovery (specifically, in astronomy) reveals to us the most important truths of the human condition; Buckminster Fuller, who argued that breakthroughs in chemistry, engineering and manufacturing would ensure humanity’s survival on “Spaceship Earth” 

We can make all of humanity successful through science’s world-engulfing industrial evolution. 

– Buckminster Fuller, Operating Manual for Spaceship Earth (1968)

1980s: Steve Jobs, who popularized the personal computer (the Mac) as a tool for self-empowerment, self-expression and self-liberation (hence, Apple’s iconic “1984” TV advertisement); Eric Drexler, the MIT engineer whose 1986 book, Engines of Creation: The Coming Era of Nanotechnology, imagined a future free from want because we’ll be able to assemble anything and everything we need, atom-by-atom; Hans Moravec, an early AI researcher whose 1988 book, Mind Children, applied Moore’s Law to the emerging fields of robotics and neuroscience and predicted that humanity would possess godlike powers of Creation-with-a-capital-C by 2040. Our robots would take our place as Earth’s most intelligent species.

1990s: Bill Gates, whose vision of “a computer on every desktop” equated improved access to Microsoft software with improvements in human well-being; Ray Kurzweil, another AI pioneer, who argued in The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005) that the essence of what makes us human is to reach beyond our limits. It is therefore inevitable that science and technology will eventually accomplish the next step in human evolution: the transhuman. By merging the “wetware” of human consciousness with computer hardware and software, we will transcend the biological limits of brainpower and lifespan.

2000s: Sergey Brin and Larry Page, who convinced us that by organizing the world’s information, Google could help humanity break through the barrier of ignorance that stands between us and the benefits that knowledge can bring; Steve Jobs (again), who popularized the smartphone as a tool of self-empowerment, self-expression and self-liberation (again), by making it possible for everyone to digitize everything we see, say, hear and touch when we’re not at our desks.

2010s: Mark Zuckerberg, who, in his Facebook manifesto, positions his company’s social networking technology as necessary for human progress to continue: 

Our greatest opportunities are now global—like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses—like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community…Facebook develops the social infrastructure to give people the power to build a global community that works for all of us. 

(Facebook, apparently, is the technology that will redeem us all from our moral failure to widen our ‘circle of compassion’ [as Albert Einstein called it] toward one another.)

Elon Musk likewise frames his SpaceX ‘Mars-shot’ as necessary. How else will humanity ever escape the limits of Spaceship Earth? (More than seventy years after Vannevar’s Endless Frontier report, we now take for granted that “escaping” such “limits” is the proper goal of science—and by extension, of society.)

And last (for now, at least), Yuval Harari, whose latest book, Homo Deus: A Brief History of Tomorrow, says it all in the title.

Science and technology is the engine of human progress. That idea has become so obviously true to modern minds that we no longer recognize it for what it really is: modernity’s single most debatable premise. 

Rather than debate this premise—a debate which itself offers dizzying possibilities of progress, in multiple dimensions, by multiple actors—we quite often take it as gospel. 

Rather than debate this premise, Yuval instead takes it to its ultimate conclusion, and speaks loudly the question that the whole line of High Priests before him quietly whispered: Do our powers of science and technology make us gods? 

It is the same question that Oppenheimer voiced in 1945, only now it’s been purified of all fear and doubt.

We Can Make Heaven On Earth

“Utopia,” which Thomas More coined in his book by the same name in 1516, literally means “no place.” In the centuries since, many prophets of this or that persuasion have painted utopian visions. But what makes present-day visions of techno-utopia different is the path for how we get there. 

In the past, the path to Utopia called for an impossible leap in human moral behavior. Suddenly, we’ll all follow the Golden Rule, and do unto others as we would have done unto us. Yeah, right. 

But today’s path to techno-Utopia calls for a leap in science and technology—in cybernetics, in artificial intelligence, in biotechnology, in genetic manipulation, in molecular manufacturing. And that does seem possible…doesn’t it? Put it this way: Given how far our technology has come since the cracking of the atom, who among us is willing to say that these breakthroughs are impossible?

And if they are not impossible, then Utopia is attainable. Don’t we then have a duty—a moral duty—to strive for it?

This argument is so persuasive today because we have been persuading ourselves of it for so long. Persuasive—and pervasive. It is the basic moral case being made by a swelling number of tech-driven save-the-world projects, the starkest example of which is Singularity University. 

I find it so compelling that I don’t quite know what to write in rebuttal…

Gods—Or Slaves?

Until I recall some of the wisdom of Hannah Arendt or Zygmunt Bauman, or remember my earlier conversation with Ian, and remind myself that technology never yields progress by itself. Technology cannot fix our moral and social failings, because those same failings are embedded within our technologies. They spread with our technologies. Our newest technology, A.I. (which learns our past behaviors in order to repeat them), is also the plainest proof of this basic truth. More technology will never be the silver-bullet solution to the problems that technology has helped create.

And so we urgently need to delve into this deepest naïveté of our modern mindset, this belief that technological change is a good thing.

How might we corrupt our techno-innocence? 

One thing that should leap out from my brief history of the techno-optimistic narrative is that most of the narrators have been men. I don’t have a good enough grasp of gender issues to do more than point out this fact, but that right there should prompt some deep conversations. Question: Which values are embedded in, and which values are excluded from, tech-driven visions of human progress? (E.g., Is artificial enhancement an expression of humanity’s natural striving-against-limits, or a negation of human nature?)

As a political scientist, I can’t help but ask the question: Whose interests are served and whose are dismissed when technology is given pride of place as the primary engine of our common future? Obviously, tech entrepreneurs and investors do well: Blessed are the tech innovators, for they are the agents of human progress. At the same time: Accursed are the regulators, for they know not what they govern. 

Yuval slips into this kind of thinking in his Homo Deus, when he writes:

Precisely because technology is now moving so fast, and parliaments and dictators alike are overwhelmed by data they cannot process quickly enough, present-day politicians are thinking on a far smaller scale than their predecessors a century ago. Consequently, in the early twenty-first century politics is bereft of grand visions. Government has become mere administration. It manages the country, but it no longer leads it.

But is it really the speed of technological change, is it the scale of data, that limits the vision of present-day politicians? Or is it the popular faith that any political vision must accommodate the priorities of technological innovators? For all its emerging threats to our democracy, social media must be enabled. For all its potential dangers, research into artificial intelligence must charge ahead. Wait, but—why? 

Why!?! What an ignorant question!

And while we’re on the topic of whose interests are being served/smothered, we should ask: whose science and technology is being advanced, and whose is being dismissed? “Science and technology” is not an autonomous force. It does not have its own momentum, or direction. We determine those things. 

The original social contract between science and society proposed by Vannevar Bush in 1945 saw universities and labs doing pure research for its own sake, guided by human curiosity and creativity. The private sector, guided by the profit motive, would then sift through that rich endeavor to find good ideas ready to be turned into useful tools for the rest of us. But the reality today is an ever closer cooperation between academia and business. Private profit is crowding out public curiosity. Research that promises big payoffs within today’s economic system usually takes precedence over research that might usher in tomorrow’s…

Homo Humilitas 

All predictions about the future reflect the values and norms of the present. 

So when Yuval drops a rhetorical question like, Will our powers of science and technology one day make us gods?, it’s time to ask ourselves tough questions about the value we place on technology today, and what other values we are willing to sacrifice on its altar. 

The irony is that, just by asking ourselves his question—by elevating science and technology above other engines of progress, above other values—we diminish what humanity is and narrow humanity’s future to a subset of what might be.

It is as if we’ve traded in the really big questions that define and drive progress—“What is human life?” and “What should human life be?”—for the bystander-ish “What does technology have in store for our future?”

That’s why I suspect that the more we debate the relationship between technology and progress, the more actual progress we will end up making. 

I think we will remind ourselves of the other big engines of progress at society’s disposal, like “law” and “culture” and “religion,” which are no less but no more value-laden than “technology.”

I think we will remind ourselves of other values, some of which might easily take steps backward as technology “progresses”. E.g., As our powers to enhance the human body with technology grow stronger, will our fragile, but fundamental, belief in the intrinsic dignity of every human person weaken? 

I think we will become less timid and more confident about our capacity to navigate the now. Within the techno-utopian narrative, we may feel silenced by our own ignorance. Outside of that narrative, we may feel emboldened by our wisdom, our experience, our settled notions of right and wrong. 

I think we will recall, and draw strength from, examples of when society shaped technology, and not the other way around. In the last century, no technology enjoyed more hype than atomic energy. And yet just look at the diversity of ways in which different cultures incorporated it. In the US, where the nuclear conversation revolves around liability, construction of new nuclear plants all but stopped after the Three Mile Island accident of 1979. In Germany, where the conversation revolves around citizens’ rights to participate in public risk-taking, the decision was taken in 2011 to phase out all 17 reactors in the country—in direct response to the Fukushima meltdown in Japan. Meanwhile in South Korea, whose capital Seoul is only 700 miles from Fukushima, popular support for the country’s 23 reactors remained strong. (For South Koreans, nuclear technology has been a symbol of the nation’s independence.)

And I think we will develop more confidence to push back against monolithic techno-visions of “the good.” Wasn’t the whole idea of modernity supposed to be, as Nietzsche put it, “God is dead”—and therefore we are free to pursue a radical variety of “goods”? A variety that respects and reflects cultural differences, gender differences, ideological differences… Having done the hard work to kill one idea of perfection, why would we now all fall in line behind another? 

Four Little Questions To Reclaim The Future 

None of the above is to deny that technology is a profound part of our lives. It has been, since the first stone chisel. But we hold the stone in our hands. It does not hold us.

Or does it? After decades of techno-evangelism, we risk slipping into the belief that if we can do it, we should do it. 

Recent headlines (of cybercrime, social media manipulation, hacked public infrastructure and driverless car accidents) are shaking that naïveté. We understand, more and more, that we need to re-separate, and re-arrange, these two questions, in order to create some space for ethics and politics to return. What should we do? Here, morality and society must be heard. What can we do? Here, science and technology should answer.

Preferably in that order. 

It’s hard to imagine that we’ll get there. But I think: the more we debate the relationship between technology and progress, the more easily we will find our rightful voice to demand of any techno-shaman who intends to alter society:

  1. What is your purpose? 
  2. Who will be hurt? 
  3. Who will benefit? 
  4. How will we know?

By asking these four simple questions, consistently and persistently, we can re-inject humility into our technological strivings. We can widen participation in setting technology’s direction. And we can recreate genuinely shared visions of the future. 

Map #32: Homo Hubris? (Part 1 of 2)

On the heels of my recent foray into A.I., I’ve been reading a bunch of recent books on our coming technological utopia: Yuval Harari’s Homo Deus, Peter Diamandis’s Abundance, Steven Pinker’s Enlightenment Now and Ray Kurzweil’s The Singularity Is Near. They’re quick reads, because they all say basically the same thing. Thanks to emerging ‘exponential technologies’ (i.e., Artificial Intelligence, Internet of Things, 3D printing, robotics and drones, augmented reality, synthetic biology and genomics, quantum computing, and new materials like graphene), we can and will build a new and better world, free from the physical, mental and environmental limits that today constrain human progress. 

It all seems so clear. So possible. 

To muddy the waters, I sat down for coffee with my pal and frequent co-author Ian Goldin, who throughout his whole career—from advising Nelson Mandela in his home country South Africa, to helping shape the global aid agenda as a senior exec at the World Bank—has labored to solve poverty. (His latest book, Development: A Very Short Introduction, is, as the title suggests, an excellent starting point on the subject.)

I wanted to explore the ‘hard case’ with Ian. And the hard case is poverty, in its many manifestations. Whether these ‘exponential technologies’ relieve me of the burden of leaving my London flat to buy shaving blades is one thing; whether they can help relieve the burden of poor sanitation in Brazilian favelas is another thing entirely. 

Question: will the sexy new technologies that scientists and techies are now hyping really give us the weapons we need to solve the hard problems that plague the world’s poor? 

Answer: Not unless we first address ‘four naïvetés’ in our thinking about the relationship between ‘technology’ and ‘development.’ 

Vanishing Jobs: No Ladder To Climb

The first naïveté concerns jobs. Automation is making the existing model for how poor people get richer obsolete. How did China grow its economy from insignificant in the 1970s, to the world’s largest today? That ascent began with low-cost labor. Foreign manufacturers moved their factories to China. The money they saved by paying lower wages (in the early 1990s, the average Chinese factory wage was US$250 per year) more than offset the increased cost of shipping their products all the way from China to their customers.

Today, average factory wages in China are quite a lot higher (the latest stat I’ve seen is US$10,000 per year). The countries that can boast loudest about their low-cost labor supply in 2018 are places like Burundi, Malawi and Mozambique. Unfortunately for them, fewer and fewer foreign manufacturers see low-cost labor as a winning strategy. Nowadays, in industries ranging from smartphones to automobiles, increasingly capable and affordable factory robots can crank out more, better, customized products than an assembly line staffed by humans ever could. In the rapidly arriving world of robot factories, it is not the cost of labor, but rather the cost of capital, that determines a factory’s profitability. And capital—whether in the form of a bank loan, a public offering of stock, or private equity investment—is much cheaper and easier to raise in the mature financial markets of New York than in New Delhi or Côte d’Ivoire. How will Africa ever repeat China’s economic climb, if the first and lowest rung on the development ladder—i.e., a low-cost labor advantage—has been sawed off by robots? 

Gravity’s Pull (and the Pooling of Scarce Skills)

The second naïveté concerns choice. It’s a safe assumption that births of big-brained people are evenly distributed across the whole of humanity. From Canada to Cameroon, a similar share of the population is born with the raw intellectual horsepower needed to understand and push the boundaries of today’s science and technology. And thanks to the internet, mobile data and digital knowledge platforms, whether in Canada or the Central African country of Cameroon, such big-brained people now have a better chance than at any other time in history to nurture that intelligence. Genius is flourishing. Globally.

But as it matures, genius tends to pool in just a few places. That’s because, while the odds of winning the intelligence lottery at birth might be distributed evenly everywhere, the opportunities to cash in that winning ticket are not. Those opportunities pool. Within countries, they pool in the cities and on the coastlines. Between countries, they pool in the fastest-growing and richest economies. If I am a talented data scientist in Cameroon, am I going to start up a business in my capital city of Yaoundé (maybe) or (more likely) get on a plane to Silicon Valley, where the LinkedIns and Facebooks of the world today dangle US$2 million starter salaries in front of people with skills like mine? (Right now, even top-tier US universities struggle to retain skilled staff when Silicon Valley comes recruiting. How on earth can Cameroon compete?)

If technology does drive progress, and if the skills needed to drive the technology are scarce, then progress will remain uneven—and poorer places will continue to lag behind. 

 

Politics Are Unavoidable—And Decisive 

The third naïveté (or maybe it’s 2(b)) concerns distribution. Every technology has strong distribution effects. It generates winners and losers. Some own it; others pay to use it. Some build it (and accumulate valuable equity); others buy it (and accumulate debt). Some talents are in high demand, and salaries soar; some talents are no longer required, and jobs are lost. 

That’s life. How society chooses to respond to these distribution effects is a political question, one that every community answers for itself (albeit with varying degrees of popular awareness and participation). Public institutions and laws passed by the political system (regarding, say: property rights, taxation, transparency and oversight) shape what happens after the gains and losses are…er…won and lost.

If the big topic we’re interested in is “progress”, then we need to take an interest in these political questions. Technologies never, ever yield progress by themselves. (For the clearest evidence, look no further than the United States. Since 1990, U.S. society has undergone an astonishing technological transformation: the advent of the Internet; the advent of mobile phones (and now, the mass adoption of mobile broadband data); the mapping of the human genome and advent of genetic medicine; the advent of autonomous vehicles; the advent of commercial space travel; the advent of e-commerce and social media; the advent of 3D printing and nanotechnology and working quantum computers; the advent of turmeric lattes. And yet, for all that, the average salary of people in the bottom 20% of US wage-earners is lower today, in real-dollar terms, than it was 28 years ago. Put another way, the bottom 20% of wage-earners are taking home less pay today than they did back when the overall US economy was only half its current size. If we’re talking about economic progress, it’s pretty clear that there’s been a lot for some, and less than none for others.)

All ‘Technology’ Is A Solution AND A Problem 

The fourth naïveté concerns the social nature of technology. Technology may be a solution, yes, but it is not only that. It is also a package of unintended risks and consequences that need to be managed by society. The most infamous example is the Bhopal disaster in India in 1984. A leak of toxic gas at Union Carbide’s pesticide plant killed thousands and injured hundreds of thousands more. It was the 20th century’s worst industrial accident. 

The intended consequence was to catapult India’s farmers into the future. A Green Revolution was underway! New, chemical fertilizers were lifting crop yields to never-before-seen heights! By importing the latest fertilizer technology from the U.S., India would join that Revolution and banish the specter of mass starvation from its borders. 

Instead, the disaster demonstrated how badly wrong things can go when we transfer the material components of a technology from one society to another, but ignore its social components: risk assessment, regulation, ethics, and public awareness and participation. 

In short: a society is a great big, complex system. Technology is just one input. Other inputs, stocks, flows and feedback loops also determine what the eventual outputs will be. 

Technology = Progress?

The deepest naïveté—the belief lurking in the background of all the above—is that technological change is a good thing.

This is one of the Biggest Ideas of our time—and also one of the least questioned…

To be continued next Sunday…

Map #31: A.I. — Humanity’s Lab Rat Or Robot Overlord?

’I have always believed that any scientific concept can be demonstrated to people with no specialist knowledge or scientific education.’ Richard Feynman, Nobel physicist (1918-1988)

I feel like the whole field of AI research could take a cue from Richard Feynman. Why is something that computer scientists, businesses, academics and politicians all hype as “more important to humanity than the invention of fire” also so poorly explained to everyone? How can broader society shape the implications of this new Prometheus if no one bothers to frame the stakes in non-specialist terms?

This is a project that I’m working on, and I’d love to hear your thoughts on my initial attempts. For starters, in this letter I want to map out, in non-specialist language, where AI research has been and where it is today. 

The Temple of Geekdom

I had coffee recently with the head of AI for Samsung in Canada, Darin Graham. (He’s heading up one of the five new AI hubs that Samsung is opening globally; the others are in Silicon Valley, Moscow, Seoul and London.)

Hanging out with a bona fide guru of computer science clarified a lot of things for me (see this letter). But Darin demanded a steep price for his wisdom: he made me promise to learn a programming language this year, so that the next time we talk I can better fake an understanding of the stuff he’s working on. 

Given my chronic inability to say “No” to people, I now find myself chipping away at a certification in a language called Python (via DataCamp.com, which another guru-friend recommended and which is excellent). Python is very popular right now among data scientists and AI developers. The basics are quick and easy to learn. But what really sold me on it was the fact that its inventor named it after Monty Python’s Flying Circus. (The alternatives—C, Java, Fortran—all sound like they were invented by people who took themselves far too seriously.) 

To keep me company on my pilgrimage to the temple of geekdom, I also picked up a couple of the latest-and-greatest textbooks on AI programming: Fundamentals of Deep Learning (2017) and Deep Learning (2018). Again, they clarified a lot of things.

(Reading textbooks, by the way, is one of the big secrets to rapid learning. Popular books are written in order to sell copies. But textbooks are written to teach practitioners. So if your objective is to learn a new topic, always start with a good textbook or two. Reading the introduction gives you a better grounding in the topic than reading a hundred news articles, and the remaining chapters take you on a logical tour through (a) what people who actually work in the field think they are doing and (b) how they do it.) 

Stage One: Telling Computers How (A.I. As A Cookbook)

Traditional computer programming involves telling the computer how to do something. First do this. Now do that. Humans give explicit instructions; the computer executes them. We write the recipe; the computer cooks it. 

Some recipes we give to the computer are conditional: If THIS happens, then do THAT. Back in the 1980s and early 1990s, some sectors of the economy witnessed an AI hype-cycle very similar to the one we’re going through today. Computer scientists suggested that if we added enough if…then decision rules into the recipe, computers would be better than mere cooks; they’d be chefs. Or, in marketing lingo: “expert systems.” After all, how do experts do what they do? The answer (it was thought) was simply: (a) take in information, then (b) apply decision rules in order to (c) reach a decision. 
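To make the cookbook concrete, here is a minimal sketch of the flavor: hand-written rules applied to incoming facts. The rules and thresholds below are invented purely for illustration; a real system would have thousands of them, painstakingly extracted from human experts.

# A toy "expert system": hand-written if...then rules applied to incoming facts.
def triage(temperature_c, has_cough):
    # Each branch is one decision rule dictated by a human expert.
    if temperature_c >= 38.0 and has_cough:
        return "suspect flu"
    if temperature_c >= 38.0:
        return "fever of unknown origin"
    if has_cough:
        return "common cold"
    return "no action needed"

print(triage(38.5, True))   # -> suspect flu
print(triage(36.8, False))  # -> no action needed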

It seemed a good job for computers to take over. Computers can ingest a lot more information, a lot faster, than any human can. If we can tell them all the necessary decision rules (if THIS…, then THAT), they’ll be able to make better decisions, faster, than any human expert. Plus, human expertise is scarce. It takes a long time to reproduce—years, sometimes decades, of formal study and practical experience. But machines can be mass manufactured, and their software (i.e., the cookbooks) can be copied in seconds.

Imagine the possibilities! Military and big business did just that, and they invested heavily in building these expert systems. How? By talking to and watching experts in order to codify the if THIS…, then THAT recipes they followed. A new age of abundant expertise lay just around the corner.

Or not. Most attempts at the cookbook approach to computerizing expertise failed to live up to the hype. The most valuable output from the “expert system” craze was a fuller understanding of, and appreciation for, how experts make decisions. 

First, it turned out that expert decision rules are very hard to write down. The first half of any decision rule (if THIS…) assumes that we’ve seen THIS same situation before. But experts are rarely lucky enough to see the same situation twice. Similar situations? Often. But ‘the same’? Rarely. The value of their expertise lies in judging whether the differences they perceive are relevant—and if so, how to modify their decision (i.e., …then THAT) to the novelties of the now. 

Second, it turned out that there’s rarely a one-to-one relationship between THIS situation and THAT decision. In most domains of expertise, in most situations, there’s no single “right” answer. There are, instead, many “good” answers. (Give two Michelin-starred chefs the same basket of ingredients to cook a dish, and we’d probably enjoy either one.) We’ll probably never know which is the “best” answer, since “best” depends, not just on past experience, but on future consequences—and future choices—we can’t yet see. (And, of course, on who’s doing the judging.) That’s the human condition. Computers can’t change it for us.

Human expertise proved too rich, and reality proved too complex, to condense into a cookbook. 

But the whole venture wasn’t a complete failure. “Expert systems” were rebranded as “decision support systems”. They couldn’t replace human experts, but they could be valuable sous-chefs: by calling up similar cases at the click of a button; by generating a menu of reasonable options for an expert to choose from; by logging lessons learned for future reference. 

Stage Two: Training Computers What (From Cooks to Animal Trainers)

Many companies and research labs that had sprung up amidst the “expert system” craze went bust. But the strong survived, and continued their research into the 2000s. Meanwhile, three relentless technological trends transformed the environment in which they worked, year by year: computing power got faster and cheaper; digital connections reached into more places and things; and the production and storage of digital data grew exponentially.

This new environmental condition—abundant data, data storage, and processing power—inspired a new approach to AI research. (It wasn’t actually ‘new’; the concept dated back to at least the 1950s. But the computing technology available then—knobs and dials and vacuum tubes—made the approach impractical.) 

What if, instead of telling the computer exactly how to do something, you could simply train it on what to do, and let it figure out the how by itself? 

It’s the animal trainer’s approach to AI. Classic stimulus-response. (1) Supply an input. (2) Reward the outputs you want; punish the outputs you don’t. (3) Repeat. Eventually, through consistent feedback from its handler, the animal makes its own decision rule—one that it applies whenever it’s presented with similar inputs. The method is simple but powerful. I can train a dog to sit when it hears the sound, “Sit!”; or point when it sees a bird in the grass; or bark when it smells narcotics. I could never tell the dog how to smell narcotics, because I can’t smell them myself. But I don’t need to. All I need to do is give the dog clear signals, so that it infers the link between its behaviors and the rewards/punishments it receives. 

This “Machine Learning” approach has now been used to train systems that can perform all kinds of tricks: to classify an incoming email as “spam” or not; to recognize objects in photographs; to pick out those candidates most likely to succeed in Company X from a tall pile of applicants; or (here’s a robot example) to sort garbage into various piles of glass, plastic and metal. The strength of this approach—training, instead of telling—comes from generalization. Once I’ve trained a dog to detect narcotics, using some well-labelled training examples, it can apply that skill to a wide range of new situations in the real world. 
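Here is a minimal sketch of that training-not-telling idea, using the scikit-learn library and four invented “emails.” Nobody writes a spam rule in this code; the model infers its own from the labels.

# Training, not telling: a tiny spam filter learns its own rules from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",
    "cheap meds limited offer",
    "meeting agenda for monday",
    "can you review my draft",
]
labels = ["spam", "spam", "not spam", "not spam"]  # the trainer's rewards and punishments

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)  # turn words into counts

model = MultinomialNB()
model.fit(features, labels)  # the "dog" infers its own decision rule

print(model.predict(vectorizer.transform(["claim your free prize"])))  # likely -> ['spam']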

One big weakness of this approach—as any animal or machine trainer will tell you—is that training the behavior you want takes a lot of time and effort. Historically, machine trainers have spent months, years, and sometimes decades of their lives manually converting mountains of raw data into millions of clear labels that machines can learn from—labels like “This is a narcotic” and “This is also a narcotic”, or “This is a glass bottle” and “This is a plastic bottle.” 

Computer scientists call this training burden “technical debt.” 

I like the term. It’s intuitive. You’d like to buy that mansion, but even if you had enough money for the down-payment, the service charges on the mortgage would cripple you. Researchers and companies look at many Machine Learning projects in much the same light. Machine Learning models look pretty. They promise a whole new world of automation. But you have to be either rich or stupid to saddle yourself with the burden of building and maintaining one.

Another big weakness of the approach is that, to train the machine (or the animal), you need to know in advance the behavior that you want to reward. You can train a dog to run into the bushes and bring back “a bird.” But how would you train a dog to run into the bushes and bring back “something interesting”???

From Animal Trainers to Maze Architects

In 2006, Geoff Hinton (YouTube) and his AI research team at the University of Toronto published a seminal paper on something they called “Deep Belief Networks”. It helped spark a new subfield of Machine Learning called Deep Learning. 

If Machine Learning is the computer version of training animals, then Deep Learning is the computer version of sending lab rats through a maze. Getting an animal to display a desired behavior in response to a given stimulus is a big job for the trainer. Getting a rat to run a maze is a lot easier. Granted, designing and building the maze takes a lot of upfront effort. But once that’s done, the lab technician can go home. Just put a piece of cheese at one end and the rat at the other, and the rat trains itself, through trial-and-error, to find a way through. 

This “Deep Learning” approach has now been used to produce lab rats (i.e., algorithms) that can run all sorts of mazes. Clever lab technicians built a “maze” out of Van Gogh paintings, and after learning the maze the algorithm could transform any photograph into the style of Van Gogh. A Brooklyn team built a maze out of Shakespeare’s entire catalog of sonnets, and after learning that maze the algorithm could generate personalized poetry in the style of Shakespeare. The deeper the maze, the deeper the relationships that can be mimicked by the rat that runs through it. Google, Apple, Facebook and other tech giants are building very deep mazes out of our image, text, voice and video data. By running through them, the algorithms are learning to mimic the basic contours of human speech, language, vision and reasoning—in more and more cases, well enough that the algorithm can converse, write, see and judge on our behalf. (Did you all see the Google Duplex demo last week?)

There are two immediate advantages to the Deep Learning approach—i.e., to unsupervised, trial-and-error rat running, versus supervised, stimulus-response dog training. The obvious one is that it demands less human supervision. The “technical debt” problem is reduced: instead of spending years manually labelling interesting features in the raw data for the machines to train on, the rat can find many interesting features on its own. 

The second big advantage is that the lab rat can learn to mimic more complicated, more efficient pathways than a dog trainer may even be aware exist. Even if I could, with a mix of rewards and punishments, train a dog to take the path that I see through the maze, is it the best path? Is it the only path? What if I myself cannot see any path through the maze? What if I can navigate the maze, but I can’t explain, even to myself, the path I followed to do it? The maze called “human language” is the biggest example of this. As children, we just “pick up” language by being dropped in the middle of it. 
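For readers who want to see that first advantage in code, here is a minimal sketch of an autoencoder, written with the PyTorch library. The random numbers below stand in for real images; the point is that no human labels anything, yet the network learns a compressed set of features on its own.

# Unsupervised learning in miniature: an autoencoder invents its own compressed
# features of the data, with no hand-written labels at all.
import torch
import torch.nn as nn

data = torch.rand(256, 64)  # 256 fake "images", each flattened to 64 numbers

model = nn.Sequential(
    nn.Linear(64, 8), nn.ReLU(),  # encoder: squeeze 64 numbers down to 8 learned features
    nn.Linear(8, 64),             # decoder: rebuild the original 64 from those 8
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):                   # trial and error; no trainer in sight
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)  # the "cheese": smaller error is better
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())  # reconstruction error after the rat has run the maze for a while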

 

So THAT’S What They’re Doing

No one seems to have offered this “rat in a maze” analogy before. It seems a good one, and an obvious one. (I wonder what my closest AI researcher-friends think of it—Rob, I’m talking to you.) And it helps us to relate intuitively with the central challenge that Deep Learning researchers (i.e., maze architects) grapple with today:

Given a certain kind of data (say, pictures of me and my friends), and given the useful behavior we want the lab rat to mimic (say, classify the pictures according to who’s in them), what kind of maze should we build? 

Some design principles are emerging. If the images are full color, then the maze needs to have at least three levels (Red, Green, Blue), so that the rat learns to navigate color dimensions. But if the images are black-and-white, we can collapse those levels of the maze into one. 

Similarly, if we’re dealing with data that contains pretty straightforward relationships (say, Column A: historical data on people’s smoking habits and Column B: historical data on how old they were when they died), then a simple, flat maze will suffice to train a rat that can find the simple path from A to B. But if we want to explore complex data for complex relationships (say, Column A: all the online behaviors that Facebook has collected on me to-date and Column B: a list of all possible stories that Facebook could display on my Newsfeed today), then only a multi-level maze will yield a rat that can sniff out the stories in Column B that I’d click on. The relationship between A and B is multi-dimensional, so the maze must be multi-dimensional, too. Otherwise, it won’t contain the path. 
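In network terms (stripping away the analogy for a moment), those design choices look roughly like the sketch below. The layer sizes are arbitrary, purely for illustration, and I’m using the PyTorch library again.

import torch.nn as nn

# Full-colour images arrive with three channels (Red, Green, Blue); black-and-white with one.
colour_layer = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
grayscale_layer = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3)

# A simple relationship (smoking habits -> age at death) needs only a flat maze...
shallow_model = nn.Linear(1, 1)

# ...while a complex one (thousands of behaviours -> thousands of candidate stories)
# needs many levels before a path from Column A to Column B even exists.
deep_model = nn.Sequential(
    nn.Linear(1000, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 5000),
)
print(deep_model)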

We can also relate to other challenges that frustrate today’s maze architects. Sometimes the rat gets stuck in a dead-end. When that happens, we either need to tweak the maze so that it doesn’t get stuck in the first place, or teleport the rat to some random location so that it learns a new part of the maze. Sometimes the rat gets tired and lazy. It finds a small crumb of cheese and happily sits down to eat it, not realizing that the maze contains a giant wheel of cheese—seven levels down. Other times, the rat finds a surprising path through the maze, but it’s not useful to us. For example, this is the rat that’s been trained to correctly identify any photograph taken by me. Remarkable! How on earth can it identify my style…of photography!?! Eventually, we realize that my camera has a distinctive scratch on the lens, which the human eye can’t see but which the rat, running through a pixel-perfect maze, finds every time. 
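The “teleport the rat” remedy has a simple code equivalent: restart the search from several random spots and keep the best result. Here is a toy sketch on a made-up landscape; the shallow dip is the crumb, the deep dip is the giant wheel of cheese.

# "Teleporting the rat": restart a simple downhill search from random points,
# so it doesn't settle for the first small crumb (a local minimum) it finds.
import random

def landscape(x):
    # Invented terrain: a shallow dip near x = -1.8, a deep one (the big cheese) at x = 1.
    return (x - 1) ** 2 * ((x + 2) ** 2 + 0.5)

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        slope = (landscape(x + 1e-6) - landscape(x - 1e-6)) / 2e-6  # numerical gradient
        x -= lr * slope
    return x

starts = [random.uniform(-4, 4) for _ in range(10)]   # ten random teleports
best = min((descend(s) for s in starts), key=landscape)
print(round(best, 2), round(landscape(best), 4))      # should land near x = 1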

 

Next Steps

These are the analogies I’m using to think about different strands of “AI” at the moment. When the decision rules (the how) are clear and knowable, we’ve got cooks following recipe books. When the how isn’t clear, but what we want is clear, we’ve got animal trainers training machines to respond correctly to a given input. And when what we want isn’t clear or communicable, we’ve got maze architects who reward lab rats for finding paths to the cheese for us. 

In practice, the big AI developments underway use a blend of all three, at different stages of the problem they’re trying to solve.

The next question, for a future letter, is: Do these intuitive (and imperfect) analogies help us think more clearly about, and get more involved in, the big questions that this technology forces society to confront? 

Our experience with “expert systems” taught us to understand and appreciate more fully how human experts make decisions. Will our experience with “artificial intelligence” teach us to understand and appreciate human intelligence more fully?

Even the most hyped-up AI demonstration right now—Google Duplex—is, essentially, a lab rat that’s learned to run a maze (in this case, the maze of verbal language used to schedule appointments with other people). It can find and repeat paths, even very complicated ones, to success. Is that the same as human intelligence? It relies entirely upon past information to predict future success. Is that the same as human intelligence? It learns in a simplified, artificial representation of reality. Is that the same as human intelligence? It demands that any behavior be converted into an optimization problem, to be expressed in numerical values and solved by math equations. Is that the same as human intelligence? 

At least some of the answers to the above must be “No.” Our collective task, as the generations who will integrate these autonomous machines into human society, to do things and make decisions on our behalf, is to discover the significance of these differences. 

And to debate them in terms that invite non-specialists into the conversation.

 

Brave voyages,

Chris

Map #30: Violence, Conformity and a Toronto Van Attack

I was going to write about something completely different this week. Then I saw the news headline that a van had plowed through a mile of sidewalk in Toronto, killing nearly a dozen people and injuring more than a dozen others. It suddenly felt wrong to continue with the work I had been doing, and it felt right to meditate on the meaning and the causes of what had just happened.

That profound sense of wrongness was itself worth thinking about. Here I am in London, England, staring out my window at another wet spring morning. When, last year, a van plowed into pedestrians on London Bridge, or when a car plowed through pedestrians outside the Houses of Parliament, or when a truck drove through a Berlin Christmas market, or through a crowd of tourists in Nice, I read the headlines, I expressed appropriate sympathy, and then I went on about my business.

In my letter a couple weeks ago, I shared a quote from Jonathan Sacks, who said that, “The universality of moral concern is not something we learn by being universal but by being particular.” It’s when violence shatters the lives of my fellow Canadians that I am deeply touched by the inhumanity of the act. I understand these words better, now.

To reach some deeper insights into what happened in Toronto, and into similar events in the past and yet to come, I picked up The Human Condition (1958), by Hannah Arendt. Hannah Arendt (1906-1975) was one of the biggest political thinkers of the 20th century. From her first major book in the 1950s onward, she tried to make sense of a lot of the same things that we are all trying to wrap our heads around today: the rise of authoritarian regimes, the consequences of automation, the degradation of our politics, and the tensions between what consumer society seems to offer us and what we actually need to be fulfilled by life.

I came away from her book with some helpful insights about three or four big topics in the public conversation right now.

I’ve only got space here to reflect on last week’s van attack. But the biggest takeaway for me from this book was that, whatever horror hits the day’s headlines, if we can, through these events, grow our understanding of the human condition, then we will go to sleep stronger than when we woke up in the morning.

And that, at least, is something positive.

Brave voyages, to us all,

Chris

 

Hannah’s Big Idea in 500 words

To grasp Hannah’s insights into our present-day headlines, we need to give her the courtesy of 500 words to tell us her big idea.

In a nutshell, her big idea is that all human doing falls into one of three buckets: labor, work and action. Now for most of us, these three words kinda mean the same thing; their definitions blur and overlap. But Hannah says, No, no, no, the differences between these words mean everything. The better we grasp their distinctions, the more clearly we will grasp the human condition—and, by extension, why everything is happening.

Work is like craftsmanship or art. We are at work when, like the sculptor, we start with an idea and turn it into reality. Work has a definite beginning and end. When the work is over, we have added something new to the world.

To Hannah’s mind, the Industrial Revolution largely destroyed work, destroyed craftsmanship, so that today it’s really only artists who experience the intrinsic value of having made something.

In the modern world, most of us don’t work anymore. Instead we labor. Labor has neither a beginning nor an end. It is an unending process of producing stuff that is consumed so that more stuff can be produced and consumed. Most laborers work on only a piece of the whole, with only a faint grasp of that whole, and little or no intrinsic satisfaction from having contributed to it. As laborers, we do not make things; we make a living. And as the cliché goes: when we’re old and grey and look back on our lives, we won’t remember the time we spent at the office. Why? Because that time was largely empty of intrinsic value; it was empty of stuff worth remembering. (Hannah could be a bit dark at times.)

Action is, for Hannah, the highest mode of human doing. To act is to take an initiative. To begin. To set something in motion. Unlike work, which is the beginning of something, action is the beginning of someone. It is how we distinguish ourselves and make apparent the plurality of human beings. If our labor reveals what we are (lawyer, banker, programmer, baker), then our actions reveal who we are. Through our words and deeds, we reveal our unique identity and tell our unique story. Action has a beginning, but it has no end—not one that we can see, anyway, because its consequences continue to unfold endlessly. (In her glorification of speech and action, Hannah lets slip her love affair with ancient Greece. More on that later.)

In short, for Hannah the whole human condition can be understood in the distinctions and conflicts that exist between labor, work and action. Through that lens, I think she would say the following about the Toronto van attack.

 

This Toronto man was reaching for immortality

Hannah, a Jew, quit Nazi Germany in 1933. She knew a lot about the horrors of violence, she studied violence, and she strove to understand it.

Hannah, I think, would have zeroed in on the driver’s desire to commit “suicide by cop,” and the consequence of his failure to do so. She wrote:

The essence of who somebody is can come into being only when life departs, leaving behind nothing but a story. Whoever consciously aims at being “essential,” at leaving behind a story and an identity which will win “immortal fame,” must not only risk his life but expressly choose (as, in Greek myth, Achilles did) a short life and premature death. 

Only a man who does not survive his one supreme act remains the indisputable master of his identity and possible greatness, because he withdraws into death from the possible consequences and continuation of what he began. 

But, because of the restraint shown by the arresting officer, the man was denied the privilege of writing his own ending. He remains alive to face the unfolding consequences of “his one supreme act.” Instead of summing up his whole life in that one single deed, his story will continue to unfold, piecemeal. With each fresh, unintended page, he will be less the author of, and more the witness to, his own place in history. He sought to win immortal fame. Instead he will live, be locked away, and be forgotten.

Those who feel weak get violent

What about this “incel” movement—this involuntary celibacy stuff—which seems to have inspired the man’s rampage? Hannah wrote:

The vehement yearning for violence is a natural reaction of those whom society has tried to cheat of their strength.

Hannah thought of strength as something that individuals possess. She distinguished strength from power, which is something that people possess—but only when they act together. In a contest between two individuals, strength (mental or physical) decides the winner. In a contest between two groups, power—not numbers—decides the winner. (That’s why history is full of examples of small but well-organized groups of people ruling over giant empires.)

But in a contest between individual strength and collective power, collective power always wins. We saw this truth in the aftermath of the Toronto attack: the public’s coming together in the wake of the driver’s rampage showed just how helpless he is, whatever weapon he might wield, to change the way things are.

We’ve also seen this truth on a larger scale, Hannah argued, in “passive resistance” movements like that of Mahatma Gandhi.

Popular revolt against strong rulers can generate an almost irresistible power—even if it foregoes the use of violence in the face of vastly superior material forces….

To call this “passive resistance” is certainly ironic; it is one of the most powerful ways of action ever devised. It cannot be countered by fighting—the other side refuses to fight—but only by mass slaughter (i.e., violence). If the strong man chooses the latter, then even in victory he is defeated, cheated of his prize, since nobody can rule over dead men.

Hannah’s point is, individual strength is helpless against collective power. For some groups, that’s been the secret to liberation: liberation from the strongman, the tyrant, the dictator. For some individuals, that’s been the explanation for their imprisonment: imprisonment by social norms, by shifting values, by their own sense of impotence.

How do we build a society of healthy individuals?

“Individual strength is helpless against collective power.” If Hannah was right about that, then the big question we need to ask ourselves is: How do we build a society of healthy individuals—a society that doesn’t suffocate, but instead somehow celebrates, individual strength?

To be sure, “involuntary celibates” who rage against their own frustrations are maladjusted, and they need to bear their own guilt for that. But, Hannah would argue, explosions of violence in our midst also remind us that the power of the group to make us conform is awesome.

So how do we each assert our uniqueness within society? It’s a deadly serious question.

 

The Greek solution

For Hannah, who saw only three basic choices in front of each of us—labor, work or action—the only hope for modern man and woman to assert their individuality lay in the last: action. Labor is the activity that deepens our conformity, until even our basic biological choices of when to sleep and when to eat are set by the rhythm of the economic machine. And the industrial revolution destroyed whatever private satisfactions once existed in the craftsman’s “work”.

So we’re left with action. And, Hannah mused, we’ve got two choices: either we create a public arena for action to be put on display, or people will create it on their own. The Toronto van attacker did the latter.

The ancient Athenians did the former. Their public arena for action was the political arena, the polis. In our modern economic mind, we tend to think of the political arena as an unproductive space where “nothing gets done.” But to the ancient Athenians, it was the primary public space where individual achievements were asserted and remembered.

The polis was supposed to multiply the occasions to win “immortal fame”—that is, to multiply the chances for everybody to distinguish himself, to show in deed and word who he was in his unique distinctness. 

The polis was a kind of “organized remembrance,” so that “a deed deserving fame would not be forgotten.” Unlike the products of human labor and work, the products of human action and speech are intangible. But through the polis, they became imperishable.

For ancient Athenians, the political arena squared the circle. It transmuted society’s awesome power to force conformity into a shared celebration of individual strength. Or, as the philosopher Democritus put it in the 400s BC:

“As long as the polis is there to inspire citizens to dare the extraordinary, all things are safe; if it perishes, everything is lost.” 

 

How do WE square the circle?

Unfortunately, when it comes to the healthy assertion of individuality today, we modern people have painted ourselves into a corner. And we keep painting it smaller.

Politics isn’t a viable place for us to immortalize deeds anymore, because (in a complete reversal of how the ancient Greeks saw it) politics produces nothing for consumption—and is therefore unproductive.

But our primary productive activity—labor—is, likewise, empty of distinguishing deeds that we might want to immortalize. (Again, that seems to be the truth we all admit on our deathbeds.)

Maybe one day soon, when automation frees us from the need to perform any labor at all, we will then use our abundant time to develop our individual excellences. Hannah had grave doubts about that. More likely, having already formed the habit of filling our spare time with consumption, we will, given more spare time, simply consume more—more food, more entertainment, more fuel for our appetites.

The danger is that such a society, dazzled by its own abundance, will no longer be able to recognize its own emptiness—the emptiness of a life which does not fix or realize anything which endures.

Even our “private lives”—where we should feel no need to assert our individuality against the group at all—are now saturated with the need to perform for others.

The conclusion, which would be really funny if it weren’t so serious, is that for all our runaway individualism, it may be that modern society suffers from a crisis of individuality.

 

Hold onto the dilemma

Hannah didn’t solve these problems for us. (I guess that’s why she called this book The Human Condition.) But she did frame two powerful questions that can help us respond meaningfully to senseless acts of violence:

  1. How can I take part in the power of people acting together?
  2. What is the arena in which I celebrate individual distinctiveness—mine, and others’?

This dilemma isn’t going away. ‘Squaring this circle’ between social conformity and individual strength is one of the big, underlying projects of our time.

 

Map #29: False Choices (‘Should Science Or Values Take Priority?’) 

I Hate False Choices

As many of you know, I’m actively exploring routes to re-engage with my Canadian…er…roots. One of those routes is a policy fellowship program. I recently went through the application process, which included responding to the following question:

 In policy making, science and evidence can clash with values and perspectives. What should take precedence and why?

False choices like this one are a special hate of mine. So I thought I’d share my brief rant with you all. (If you share my special hate, or hate my special rant, let’s chat!)

<begin rant>

The premise that “science and evidence” stand in opposition to “values and perspectives” is fatal to the project of liberal democracy.

The ultimate consequence of this premise is precisely the crisis we now face in our politics—namely, the emergence of new, competing truth machines to support value-based policy agendas that were consistently denied validity by the truth machine of “science and evidence.”

This error goes all the way back to the Enlightenment, when we separated science from values, elevated science to the status of higher truth and gave science a privileged, value-free position from which to survey the unenlightened.

That act of hubris planted the seed for the present-day rebellion by every value that is “held” (and therefore is real at some level) yet denied the status of reality.

So long as we frame this tension between science and values as a clash, as an either/or that must be decided in favor of one side or the other, this rebellion will spread until it enfeebles, not just policy-making, but the whole liberal democratic project.

If science and evidence ought to take precedence, then logic will ultimately lead us to the China model of “democratic dictatorship.” There, the people chose a government in 1949 (via popular revolution), and it has been running policy experiments and gathering evidence ever since. Some experiments have been successful, some spectacularly not-so, but the Party retains the authority to learn, to adapt and to lead the people, scientifically, toward a material utopia. By force, when necessary.

If, instead, values and perspectives ought to take precedence, then far from wringing our hands at the proliferation of “fake news” and “alternative truths,” we should celebrate it. Now every value, not just Enlightenment values, has a truth machine that spews out authority, on the basis of which groups that hold a particular value can assert righteousness. Excellent! But then we ought to strike the “liberal” from liberal democracy, since constraints upon the exercise of authority have no privileged ground to stand on.

The only way to avoid these inevitable destinations of the either/or premise is to reintegrate science and value—at the level of policy-making and public discourse. We need a New Enlightenment. That is the task which the irruption of post-truth politics now makes urgent. To accomplish it, the Enlightened must first wake up to the values and intuitions that underlie our questing for evidence. For example: a sense that harmony with nature or with strangers is good; a conviction that we ought to put the welfare of future generations ahead of our own; a feeling that security is a good in itself.

We, the Enlightened, must reverse the order in which we validate our values—from “Here is the evidence, and therefore this is what we should value” to “Here is what I value, and here is the evidence that helps explain why it’s good.”

</end rant>

Map #28: A Higher Loyalty?

I was in Washington, D.C. this past week—the talk-shop capital of the world. I attended a conference on the future of war and spoke at a conference on the future of energy. In between, I took in Mark Zuckerberg’s hearings on Capitol Hill—and even found time to binge on a season of West Wing. (In the 2000s, it was serious political drama. Now, it’s good comedy. What seemed scandalous for the White House fifteen years ago looks so cute today.)

Sucked Into D.C. Coffee Row

I’ve been trying to get my head out of politics for the last couple of weeks, but in D.C. that’s impossible. The first question everyone asks you is, “So, what do you do?” (Here, networking is a way of life.) Then there’s a mandatory 10-minute conversation about the last Trump-smacker: his latest Tweet, or the latest story to break on Politico or The Hill or NYT or WaPo. (This is a city that votes 90% Democrat.) Then they ask you your name.

Other than Zuck’s Facebook testimony, the biggest story on everyone’s lips in D.C. this past week was A Higher Loyalty, the forthcoming book by former FBI Director James Comey. Technically it’s an autobiography of Comey’s full career in law enforcement, but most people are only interested in the last chapter—his time with, and firing by, Donald Trump.

The title references the now infamous, intimate ‘loyalty dinner’ that Comey attended at the White House mere days after Trump’s inauguration. Trump allegedly asked his FBI Director to pledge his loyalty, and Comey, demurring, pledged ‘honesty’ instead.

A few months later, in May 2017, Trump fired Comey. That action prompted the Justice Department to appoint a Special Counsel to look into Trump’s Russia connections (if any), and here we still are, a year later, gobbling up every scrap of this story as fast as it emerges.

The release of Comey’s book this week marks another feeding frenzy. And while the talking heads on MSNBC and Fox News each push their particular narratives, the bigger question will be ignored completely: Is there something ‘higher’ to which all members of a society (even the democratically elected leader) owe loyalty? And if so, what is that thing?

This is a really good, really timely question.

The Constitution Isn’t High Enough

The obvious (but, I think, wrong) answer is ‘the constitution’. The U.S. constitution allows a two-thirds majority of the Senate to remove a president who has committed ‘Treason, Bribery, or other High Crimes and Misdemeanors.’ Democrats in this town dream that one day Special Counsel Robert Mueller’s investigation will find a smoking gun under Trump’s pillow, leaving the Senate—and the American people—no choice but to evict, and convict, The Donald.

More likely, I think, Mueller’s investigation will find ‘evidence of wrong-doing’—something in between ‘good’ and ‘evil’. And everyone will be just as divided as before—or, more likely, present divisions will worsen—because the process and the law leave ample room for judgment and interpretation. Was the investigative process ‘fair’? Can we ‘trust’ the process? And even if we do trust the process, does the ‘wrong-doing’ rise to the level of a ‘High Crime’—high enough to overturn the voters’ choice from 2016?

If there is to be something to which members of a society owe a ‘Higher Loyalty,’ it must be something above a country’s constitution. It must be that high place upon which we stand when the constitution is read.

Sociologists talk about trust. Economists talk about social capital. Biologists talk about the evolutionary advantages of cooperation. Political scientists talk about civil society. Lawyers talk about the distinction between ‘ethical’ and ‘legal’. Comey talks about a ‘higher loyalty’. They’re all investigations of the same idea: a healthy society depends upon more than its rules. It also depends upon a shared sense of why the rules matter.

But in a democracy, what’s higher than the constitution?

This Is Getting Biblical

One answer is: the covenant.

Constitutions define states; covenants define societies. Constitutions define the political, economic and legal system; covenants define the moral context in which these systems operate. Constitutions are contracts; covenants are the relationships of social life—relationships that, like ‘family’, cannot be reduced to legal language and market exchanges.

Among this Readership are heavyweights in political theory, constitutional law and sociology, and I sent out a little survey asking a few of you for good books that dig deeper into this idea of ‘covenant’. The #1 recommendation I got back was The Dignity Of Difference, by Jonathan Sacks. At first I was unsure—Jonathan Sacks is one of the most senior rabbis in the Jewish world, and I didn’t want to confuse his religious idea of covenant with the public idea of covenant. But it turns out that Jonathan spends a lot of time thinking about the latter.

In A Higher Loyalty, Comey talks about a looming constitutional crisis. Jonathan would say: Comey is mistaken. America doesn’t face a constitutional crisis; it faces a covenantal crisis. The latter is different, and deeper. It is a crisis over the question: What are the values that govern our society? 

The Best Things In Life Are Never Self-Evident

America’s covenant is not its constitution (signed in 1787), but its Declaration of Independence (signed in 1776). That earlier document famously begins:

We hold these truths to be self-evident: that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

The irony, of course, is that these truths are anything but self-evident. Throughout most societies in most of history, the social order has rested upon the idea that all people are not created equal. What the signatories of that Declaration really meant to say was, “Rather than rest upon the ideas of the past, we are going to build a new society upon the idea that each person (i.e., ‘white man’) is owed an equal measure of human dignity.” It took more than a decade of further debate to encode that social ideal into the new state’s constitution.

The Declaration of Independence was a declaration of the moral objective toward which American society should strive. It’s echoed in the Preamble of the U.S. Constitution:

We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution…

These blocks of text thrum with shared moral meaning. They are among the best-known sentences in the English language. What has happened to throw this covenant into crisis?

And again, the obvious answer that many people give (‘Donald Trump’) is wrong.

For A Covenant To Hold, We Must Dignify Difference

I think Jonathan would say that the warring narratives between Fox News and MSNBC, between Trump advocates and Trump haters, are a reflection of what ultimately happens when we erode the boundary between politics and religion.

For a society’s covenant to remain strong and healthy, Jonathan argues, these two spheres of social life each need a separate space to play their respective roles. Religion (and by ‘religion’, Jonathan really means all forms of deeply felt group association) is supposed to be the space in which we build identity, community and solidarity. Politics is supposed to be the space in which we work out the differences that inevitably develop between these groups.

We need both. We are social beings. We are meaning-seekers. Group association is an important part of how we become ourselves—and become good citizens. (‘The universality of moral concern is not something we learn by being universal but by being particular.’ – Jonathan Sacks)

But we also need politics. Precisely because so much of our meaning and identity arises from experiences within our particular group, our own meanings and identities will never be universally shared. Living together requires a layer of cooperation that straddles these differences.

Society starts to get ugly whenever these two spheres of social life (the space where we belong, and the space where we cooperate) collapse into one.

When religion is politicized, God takes over the system. When politics turns into a religion, the system turns into a God.

Either way, Jonathan explains, respect for difference collapses. When religion is politicized, outsiders (non-believers) are denied rights. The chosen people become the master-race. When politics turns into a religion, outsiders (non-conformers) are granted rights if and only if they conform (and thus cease to be outsiders). The truth of a single culture becomes the measure of humanity.

Progressives vs Reversers

These concepts (thank you, Jonathan) offer us a fresh way of thinking about what the heck is going on in U.S. politics at the moment.

Is it possible that Democrats have been guilty of turning politics into a religion that demands conformity? Yesterday a New York Times op-ed talked about how many Democrats have stopped talking about themselves as ‘liberals’ (because it now carries tainted connotations in U.S. discourse), and substituted the word ‘progressive’ instead.

The distinction matters. Says the op-ed writer, Greg Weiner:

‘Progressives’ are inherently hostile to moderation because progress is an unmitigated good. There cannot be too much of it. For ‘progressives’, compromise (which entails accepting less progress) is not merely inadvisable but irrational. The critic of progress is not merely wrong but a fool.

 

Because progress is an unadulterated good, it supersedes the rights of its opponents.

 

This is one reason progressives have alienated moderate voters who turned to Donald Trump in 2016. The ideology of progress tends to regard the traditions that have customarily bound communities, and which mattered to Trump voters who were alarmed by the rapid transformation of society, as a fatuous rejection of progress.

Likewise, is it possible that Republicans have been guilty of turning religion (be it guns or Jesus) into a test of citizenship—a test that demands conversion?

I think maybe yes. And if so, that’s the fundamental problem, because both sides are doing something to denigrate difference. Unless we all dignify difference, no social covenant can hold. Says Jonathan:

‘Covenants exist because we are different and seek to preserve difference, even as we come together to bring our several gifts to the common good.

…This is not the cosmopolitanism of those who belong nowhere, but the deep human understanding that passes between people who, knowing how important their attachments are to them, understand how deeply someone else’s different attachments matter to them also.’

Re-Founding America

Having just spent a whole week in Washington, D.C., I can breezily say that the best way to heal America’s divisions is for everyone to go back to Philadelphia, hold hands together, and rededicate themselves to their shared moral project: to recognize human equality and oppose sameness.

Of course, I doubt either side is ready to lay down arms and make a new covenant just yet. War is clarifying. It divides the world into us and them. Peace is the confusing part. It provokes a crisis of identity. In order to make peace with the other, we must find something in common between us, worthy of mutual respect.

Right now that’s a tall order—not just in U.S. politics, but in the domestic politics across the democratic world. (‘Populist’ isn’t a label that’s intended as a sign of respect.) But maybe all of us can start asking some good questions, wherever we are, that (a) make us sound smart, but also (b) start the conversation in our community about the covenant to which we all owe a ‘higher loyalty’:

1. Tribalism (i.e., my particular group’s ideas dominate) won’t work. Universalism (i.e., one human truth overrides my particular group’s ideas) won’t work either. So what can?

2. ‘Those who are confident in their faith are not threatened but enlarged by the different faith of others.’ (Jonathan Sacks) Are we feeling threatened? Other than attacking the other, is there another way to restore our confidence?

3. Is all this week’s coverage of James Comey’s new book helping to bring people closer to his main message (‘There’s something higher that spans our differences and makes us one society’), or drawing us further away?

 

Admittedly, that last question is purely rhetorical—we all know the answer—but somehow a list feels incomplete unless it has three items. 🙂

Brave voyages, to us all,

 

Chris

Map #27: Fixing Fake News

I wonder, do you ever share my feeling that ‘fake news’ and ‘post-truth’—these phrases that get thrown about every day by the commentariat—cloud our understanding rather than clarify it? To me, such phrases—frequently used, fuzzily defined—are like unprocessed items in the inbox of my brain. I pick them up. I put them down. I move them from one side of my desk to the other, without ever really opening them up to make sense of what they are and what to do with them.

One of my friends, Dr Harun Yilmaz, finally got tired of my conceptual fuzziness, and he and a colleague wrote a brief book to tell me what fake news is, how the post-truth era has come about, and how to win public trust—for power or profit—now that the old trust engines are broken. It’s now become my little bible on the subject. (And although I’d rather keep the insights all to myself, he’s just published it as an affordable e-book called Marketing In The Post-Truth Era.)

(I’ve never met Harun’s co-author, Nilufar Sharipova, but Harun and I go back many years. We did our PhDs side-by-side at Oxford. While I studied China, he studied the Soviet Union—specifically, how Soviet politicians and historians constructed national stories to give millions of people an imaginary, shared past that helped explain why the USSR belonged together. I’m sure his thesis was very interesting, but I mostly remember how Harun bribed his way past local Kazakh officials with bottles of vodka to peek into their dusty Soviet archives.)

Harun’s been studying ‘fake news’ and ‘post-truth’ since it was all still good ol’ ‘propaganda’—and that, from the very best in the business.

The Rise Of Fake News, In Three Key Concepts

1. The Truth Machine

To understand the post-truth era, Harun would say, we first need to understand the prior, truth era. Even back in the truth era, pure truth or real news never existed. Whether we judged a message to be true depended on (a) how the editor of the message presented it and (b) how we, the viewers, perceived it.

Here’s a visual example of (a) from the Iraq War. With the same picture (middle frame), I can present two completely different messages, depending on how I crop it.

Everything you read in a newspaper or hear on a radio, every question asked and answered, is the outcome of a human decision to accord it priority over another item. (Simon Jenkins, Columnist, The Guardian)

What about (b)? How did we perceive ‘the news’ in the era before some people started calling it ‘fake’? The honest answer—for me, at least—was that I mostly took the news to be something ‘real’.

In the post-truth era, when ‘fake news’ has now become a frequent problem, ‘critical thinking’ has become a frequent antidote. Given social media, which allows anyone to say anything to everyone, we need to educate ourselves and our children to think critically about everything we see, read and hear.

Harun would say, we needed—and lacked—this skill back in the age of mass media, too. Yes, the power to speak to large audiences was more concentrated. (You needed a broadcasting license, a TV station, a radio station, a newspaper or a publishing house—and not everyone had those.) But that concentration of power didn’t necessarily make the messages these media machines churned out more trustworthy. That was our perception—and, arguably, our naïveté.

(Here’s a personal anecdote to think about. Back in the mass media age, when I lived in China, a highly educated Chinese friend of mine argued that Chinese audiences were far more media-savvy than Western audiences. They at least knew that everything they saw, read or heard from mass media had a specific editorial bias. In China, you didn’t pick up the daily paper to read ‘the news’; you picked it up to infer the Communist Party’s agenda and priorities.

Now, to be fair to the ‘Westerners’ among us, we were never completely naïve consumers of mass media. I knew that the New York Times had its bias, which was different from the Wall Street Journal. But it’s also true that I never saw, nor enquired into, the editorial process of either. Who decided which stories were newsworthy and which weren’t? Exactly what agendas, and whose agendas, were being served by the overall narrative?)

The point is, even in the truth era, truth was something manufactured. And ‘The Truth Machine’, as Harun and his colleague call it, had three parts: experts, numbers and mass media. When orchestrated together—the experts say, the numbers show, the news reports—these three sources of legitimacy could turn almost any message into ‘truth’.

Throughout the 20th century, governments all over the world used The Truth Machine to dramatic effect: policy priorities were fed in one end, and popular support came out the other. (For CIA history buffs, Harun gives a great example from the 1950s. In 1950, Guatemala overwhelmingly elected a new president who promised to wrest control of the country’s banana-based economy back from the United Fruit Company (an American corporation that controlled the country’s ports and vast tracts of its land) and return control to the people. United Fruit and the U.S. government deployed experts, swayed journalists, staged events and made up facts to help the American public reframe the situation in Guatemala as a communist threat to American values and democracy. The new Guatemalan president had no links to the Soviet Union, yet when the CIA helped to remove him via military coup in 1954, public opinion in the U.S. held it up as another victory in the Cold War.)

Businesses, too, have been using The Truth Machine for decades to wrap commercial messages in the legitimacy of ‘truth’. A serious-looking dentist in a white uniform (expert) advises us to use Colgate toothpaste. Mr Clean bathroom cleaner kills 99.9% of bacteria (numbers). And we’re shown these advertisements over and over again (mass media).

2. The Funhouse

The more accurate way to think about our ‘post-truth’ problem today, Harun argues, is not that ‘real news’ has suddenly become drowned out by ‘fake news’. Rather: whereas once there was only one, or very few, Truth Machines operating in society, now there are many. And they’re working against each other, spewing out contradictory truths. The Truth Machines themselves have become a contradiction, since the more competing truths they manufacture, the more they undermine public trust in the authority of numbers, experts and mass media.

We cannot simply trust convincing-looking numbers anymore, because we are now bombarded with numbers that look convincing. We cannot simply trust experts anymore, because we are now bombarded by experts telling us contradictory things. We cannot trust mass media anymore, because mass media is just full of experts and numbers—which we know we can’t simply trust anymore.

The Truth Machine is broken, and so it’s like we’ve gone to the amusement park and stepped inside ‘The Funhouse’—another great metaphor, courtesy of Harun and Nilufar. In the truth era, we assumed that individuals would read and listen to different messages and make a rational choice between them. Now, multiple, contradictory truths create so much confusion that individuals start to doubt everything, like in a hall of mirrors. People think, ‘There is no way for me to know what is objectively true anymore.’

3. The Group

What we all need is a new source of sincerity. And we’re finding it: within our social reference group. It’s our first and last refuge of belief and principle about what is true and what is untrue. Rational analysis has become unreliable, so we are reverting to our oldest strategy for making sense of our world.

Groups as trust machines

The simple fact is that we are social animals. And so ‘groups’ are a real, natural, organic part of our lives. Social science is full of simple experiments, going back to its beginnings, that demonstrate how our group influences how we as individuals think and behave.

(One of the oldest and simplest experiments was conducted in the 1930s by one of the founding fathers of social psychology, Muzafer Sherif. He put participants alone in a completely dark room, except for a single penlight at the other end. He asked each person to estimate how much the point of light moved. (In fact, the light didn’t move at all; our eye muscles fatigue and twitch whenever we stare at something long enough, and those twitches cause us to see movement where there isn’t any.) Individual guesses varied widely, but once the participants got together, those whose guesses were at the high end of the range reduced theirs, and those whose guesses were at the low end raised theirs. Take-away: The group norm becomes the frame of reference for our individual perceptions—especially in ambiguous situations.)

The same technological forces behind the breakdown of The Truth Machine are also behind the rising power of groups. Organic social groups can form more easily now—around shared passions and experiences—than was previously possible. Small, scattered communities of interest have become global networks of like-mindedness. Coordinating messages and meetups, once expensive and difficult, is now free and frictionless. And social groups can filter more easily now, too, creating echo chambers that reinforce opinions within the group and delete dissonant voices.

Making Group Truths

While some of us bemoan the ‘polarization’ or ‘Balkanization’ of public opinion, some influencers—politicians, advertisers—are simply shifting strategies to better leverage this re-emerging power of group trust. More and more influencers are figuring out that, although the old Truth Machine is broken, a new ‘Truth Machine 2.0’ has been born. In this post-truth era, a manufactured message can still become trustworthy—if it reaches an individual via a group.

In fact, this new Truth Machine generates more powerful truths than the old Truth Machine ever could. There was always something artificial about the truths that the old machine manufactured; they came at us via those doctors in lab coats and news anchors pretending to scribble notes behind their news desks. But these new truths come at us organically—with fewer traces of the industrial process that spawned them.

Harun points to the ‘Pizzagate’ episode during the 2016 presidential election—maybe the wildest example of the power of this new-and-improved truth machine. Stories had circulated on social media that Hillary Clinton and other leading Democrats were running a child trafficking ring out of a pizzeria in Washington, DC. In December 2016, one proactive citizen, a 28-year-old father of two, burst into the pizzeria with his AR-15 assault rifle to free the children. He fired shots inside the restaurant as employees fled, then searched for the children. He became confused (and surrendered to DC police) when he didn’t find any.

The mainstream media debunked the child-trafficking story—which, for some, only confirmed its truth. According to public opinion polls at the time, 9% of Americans accepted the story as reliable, trustworthy and accurate. Another 19% found it ‘somewhat plausible’.

Is that a lot? I think it is: with almost no budget, no experts, no analysis, no media agency, an absurd fiction became a dangerous truth for millions of people.

Marketing Group Truths

Harun’s book with Nilufar is aimed at businesses—to help marketers rethink marketing in an age when the public has lost trust in conventional messengers. And this age does demand a fundamental rethink of the marketing function. In the industrial era, business broke consumer society into segments. We were ‘soccer moms’ and ‘weekend warriors’, ‘tech enthusiasts’ and ‘heartland households’. These segments weren’t organic. They weren’t real groups that their members identified with. They were artificial, rational constructs meant to lump together people with shared characteristics who would perceive the same message similarly. And they worked, so long as The Truth Machine worked.

‘Group marketing’ (a deceptively simple term that holds deep insight) accepts that experts, numbers and mass media are losing their authority to sway our choice-making. We just don’t trust these mass-manufactured truths anymore. But we do trust our group(s). And so, more and more of our buying decisions are based on the logic, ‘I’ll buy this because my group buys it.’

Within this growing phenomenon, Harun and Nilufar have clarified an important new rule in how to create successful brands. It used to be that a company had a Product, attached a Story to that product, and this P+S became a Brand that people Consumed. P + S = B, and B → C.

Group marketing demands a new equation. The stronger the corporate Story, the less freedom groups have to tell their own stories with a Product, and the less useful it is to the group as an expressive device. So the goal is to get the Product into the Group’s hands with a minimum of corporate storytelling. Instead, let the Group build the Brand as the sum of its members’ Individual Stories. Harun and Nilufar compiled several successful examples, my favorite of which is how Mountain Dew infiltrated skateboarding groups in Colombia. (Look for this tactic, and you start to see it everywhere…)
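
To spell out that new equation in the same shorthand (my own formulation of their point, not Harun and Nilufar’s notation): the company supplies P → G, Product straight to the Group with a minimum of Story attached; the Group then generates its own Individual Stories, IS₁, IS₂, … ISₙ, around the product; and B = IS₁ + IS₂ + … + ISₙ. The Brand is the sum of the group’s stories, not the company’s.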

Truth As A Disease

To repeat myself: more and more of our buying decisions are based on the logic, ‘I’ll buy this because my group buys it.’

What worked for Pepsi’s Mountain Dew product also worked for Cambridge Analytica’s political messaging. Ideas were manufactured, planted into groups, and accepted by group members as truth because the ideas came to them via the group.

This is where business and politics differ. Businesses can adapt their methods of persuasion to this new group-centric approach, and the economy will still function fine. It’s less clear that we can say the same about our politics.

Liberal democracy isn’t built to operate on truth. It’s built to operate on doubt. Liberal democracy is an Enlightenment project from the Age of Reason. It assumes that truth cannot be known in advance (a priori, as the philosophers say). Instead, society must grope toward the truth by making guesses—and being sensitive to what the people find out along the way. Democracy is an exploration. It depends upon a shared commitment to discovery.

Now, thanks to all these competing Truth Machines, a pre-Enlightenment culture of truth is returning—and spreading. It is a blight that threatens the whole ecology of our political system. When too many people believe they have found truth, democracy breaks down. Once truth has been found, the common project of discovery is complete. There is no more sense in sharing power with those who don’t realize it. There is no more sense in curiosity, in new evidence.

Curing Ourselves Of Truth

To rescue the possibility of groping toward Paradise democratically, we need to inject our own group discourses with doubt.

I don’t know how we manage that feat. (But I’m open to suggestions!) I only know that it’s the logical answer. If an idea is foreign to the group, the group rejects it. Therefore, only group insiders can introduce the group to doubts about its own shared ‘truths’.

Only Nixon could go to China.

And so (I bet you thought you’d never hear this one), the world needs more Nixons.

 

Map #26: Finding The Real In A Post-Truth World

Map #26: Finding The Real In A Post-Truth World

Our Shared Awareness Of Atomization

I’m guessing we all know the sensation of being detached, somehow, from the whole: when we catch ourselves in the act of reaching impulsively for our mobile phone and feel an idle guilt about our addiction to consuming content that somehow feels closer to junk food than vegetables; when we give meditation a try, find it helpful for some inexplicable reason…and then struggle to find the time to meditate again; when we get out of the city for a holiday, widen our vistas, and then feel oddly unfocussed for the first few days back at the office.

(I’ve just experienced the latter. This past week, I went cross-country skiing with an old friend in the Austrian Alps. At the top of a long uphill climb, we paused to catch our breath and take in the view. The air was perfectly still. The sky was a cloudless blue. The mountain peaks were a brilliant white. I closed my eyes and felt the sun warming my closed eyelids. I could hear a few birds singing in the surrounding forest; off to my right, I could hear pine needles crackling as they melted free of their snowy cocoons. I heard my breath. I felt it. For no particular reason, I was profoundly happy.

…and since returning to London, it’s taken me a solid two or three days of circling around my laptop to recover the focus I need to write.)

Enterprising minds have spotted our discontent with disintegration and turned reintegration into an industry. Grocery delivery services here in London emphasize, variously, ‘fresh’, ‘simple’, ‘organic’ or ‘mindful’. Meditation apps are booming. Yoga makes you balanced. Electric cars make you clean. To restore lost relationships — with our food, ourselves, our community, our environment, with the truth — has become one of the most compelling stories reshaping consumer behavior.

We shouldn’t be surprised that it has become one of the most compelling stories reshaping politics, business and society, too. Economists, sociologists, scientists, tech titans and politicians today all ply us with the need for, or the promise of, restoration. (Start to listen for it, and you start to hear it everywhere…)


An Autopsy Of Our Mind

A couple letters ago, I shared a brief scan of how different researchers across the social sciences today explain why society is disintegrating, and what to do about it. Every branch of social science offers part of the diagnosis, and part of the cure.

Their diagnoses all relate to the fragmentation that is happening ‘out there’, in the external world. But, as we’ve all experienced, the fragmentation is also happening ‘in here’. A deeper disintegration is underway, at the level of our consciousness.

This deeper disintegration is hard to research. It doesn’t yield data the same way that, say, economic inequality does. Yes, we can point to plenty of indirect evidence. The extreme cases show up in our public health statistics — rising rates of youth suicide (here in the UK, suicide is the leading cause of death among people aged 20–34), the opioid epidemic and other substance abuse, and soaring numbers of mental health cases, for example. But we cannot cut open our minds to perform an autopsy; we cannot compare the brain of a youth twenty-five years ago with the brain of a youth suicide victim today and observe how the former possessed a greater sense of belonging-to-something than the latter.

Because this internal reality of disintegration is hard to show empirically, it’s hard for us to accept it as ‘real’. (Wherever we live, we’ve all witnessed the slow struggle for society to take mental illness seriously and to overcome the stigma that’s been attached to it.) And yet, it clearly is real. We’ve all felt it. We all know the behaviors, the hungers, that it can drive. We all know the fleeting bliss that a sense of reintegration can generate.


Getting Real

To better understand the disintegration that right now seems to be taking place between our own ears, I’ve been reading a book by Jean Gebser called The Ever-Present Origin. It’s basically a history of consciousness — a history of how different cultures throughout history have had different awarenesses (if that’s a word). It’s a thick book. It’s a dense book. I wouldn’t exactly recommend it, frankly, except that it’s one of the most important books in post-modern philosophy. I hesitate even to write about it, because it will take me several more years to digest. But it is mind-blowing. Like Yuval Harari’s Sapiens, but less accessible and more insightful.

Gebser (1905–1973) was a German philosopher and linguist, and he first published the book in 1949. It’s obvious from quotes like these that he was heavily motivated by the fresh scars of World War II and by the looming threat of all-out nuclear war:

The present is defined by an increase in technological power, inversely proportional to our sense of responsibility…if we do not overcome this crisis, it will overcome us…Either we will be disintegrated and dispersed, or we must find a new way to come together.

and…

The restructuring of our entire reality has begun; it is up to us whether it happens with our help, or despite our lack of insight. If it occurs with our help, then we shall avoid a universal catastrophe; if it occurs without our aid, then its completion will cost greater pain and torment than we suffered during two world wars.

But, as anyone who invests years to write a book must, Gebser did possess some hope that a brighter future lay ahead:

Epochs of great confusion and general uncertainty…contain the slumbering, not-yet-manifest seeds of clarity and certainty.


Make Me Whole Again

Gebser’s hunch was that we can’t solve the disintegration that’s underway ‘out there’ without also solving the disintegration that’s underway ‘in here’. We won’t solve the external crises of fake news, or inequality, or political extremism, or ecological crises, without also solving our internal crises of anxiety, emptiness, self-absorption and confusion.

That’s because, for Gebser, today’s external and internal crises are two sides of the same mistake, namely that ‘we have conceded the status of ‘reality’ to only an extremely limited world, one which is barely one-third of what constitutes us and the world as a whole.’

In other words, the root of our disintegration today is that we’ve denied the reality of everything that could restore our sense of belonging, of integration, of harmony, with our selves, each other and the world.

A Brief History Of Consciousness

The one-third of reality which we do accept as real is the mental. This is the reality of measurable space-time; of measurable cause-and-effect; of time broken into past, present and future; of calendars, goals and project plans; of Cogito ergo sum, I think therefore I am.

And the two-thirds that have gone missing? They are older, earlier aspects of our consciousness that we dismissed in order to give primacy to our modern, mental awareness.

The first, Gebser calls ‘the magical’. The magical is the spaceless, timeless oneness that I sensed last week in the Austrian Alps, or whenever we still gaze up at a starry night, or pray in a crisis moment, or whenever we lose ourselves in the beat of the music that’s playing. Nowadays, we are deeply suspicious of anything labelled ‘magic’. But there was a time in human pre-history when everything in our awareness was magical. We had no notion of using measurable space and time to separate cause and effect, and so everything that happened seemed connected to everything else. Rain dances made rain; curses punished wrong-doers; an arrow drawn on a cave painting ‘killed’ the buffalo before the hunt even began. In the magical phase of human consciousness, reality was one big unified thing within which we must listen in order to survive. (I hear, therefore I am.)

The second aspect of reality that we’re missing today, Gebser calls ‘the mythical’. Mythical consciousness first began when we discovered that the oneness of nature was, in a lot of ways, more like a circle. Natural events recur, rhythmically. Once this awareness of ‘recurrence’ became part of our reality, reality became, not just a oneness, but a polarity: day and night, summer and winter, birth and death, yin and yang. We became aware that the polarity of nature extended into us: the body and the soul. We began to weave events, objects and people together into stories that gave reality greater coherence — that made all the recurrences and balances fit together. We imagined ourselves as heroes in these stories; we imagined life as a hero’s journey; we shared collective dreams as a community. In the mythical phase of human consciousness, the world became a story in which we must speak in order to survive. (I speak, therefore I am.)

For Gebser, our third aspect of consciousness, the mental, emerged when we began to go off-script (around 2,500 years ago). Instead of finding our roles within the stories inspired by nature’s patterns, we began to ad lib our own intentions and journeys, by drawing instead upon something inside ourselves. Our mythical awareness of nature’s polarity was replaced by our mental awareness of a duality: us, outside of nature.

Once we stepped outside of nature, we could begin to direct our own lives. Time, which in the magical world had been one single big moment and in the mythical world had been a circle we traced over and over again, became the line (past, present, future) along which we played out our individual intentions. Time was now finite for us, and measuring time — conquering time! — began to matter. Space, which in the magical and mythical worlds had been irrelevant to the fulfillment of our lives, now imposed itself as a limit on how far we could go. Space became finite for us, and measuring space — conquering space! — began to matter.

If you grasp that last paragraph, then you’ve grasped the past 2,500 years of how our sense of ‘reality’ has been changing. In short: we’ve been getting better and better at measuring space and time, which (a) gives us more and more power to exert our own intentions over nature but also (b) draws us further and further away from the oneness of space and time that we used to know intuitively.

(To drive this point home, Gebser offers two seminal examples: the discovery of linear perspective during the Renaissance, and the discovery of space-time in the late 19th and 20th century — which is what got me re-reading Stephen Hawking. They’re fascinating examples, and I’ll digress into them at the bottom of the page, if you’re interested.)


Finding The Real In A Post-Truth World

Fast-forward to today, and Gebser’s history of human consciousness gives us a fresh lens for understanding the biggest changes underway today.

Take the mega-problem of post-truth politics. Why do once-powerful arguments based on facts and evidence suddenly seem powerless? For Gebser, this is a familiar pattern of exhaustion. As myth replaced magic, the power of magic spells weakened into mere bewitchment, and finally into empty rituals and superstition. As mind replaced myth, the epic explanations for everything became mere stories and entertainment.

And now ‘facts’ are becoming mere ‘alternatives’.

Our instinctive reaction (mine, anyway) is to leap to the defense of Reason. We must re-educate ourselves on how to think critically, how to recognize bias, how to apply logic and to be ruled by the knowledge that emerges from scientific methods. We must put wishful thinking and tribal tendencies back in their bottles — through heavy regulation, if necessary.

Except we can’t, Gebser would say. That is precisely the conceit that led to the shock of a President Trump and a Brexit vote (or, he argued in his own lifetime, to two World Wars).

Among American voters in 2016, Donald Trump won hearts, not minds. He didn’t give any reasoned arguments. He spoke instead in mythic terms about an imaginary America under siege. He held up tribal totems — the flag, guns, male aggression. In a recent New York Times piece, the columnist David Brooks bemoaned this neo-tribalism. Gebser would say: it has always been part of us.

The magical and the mythical are real, Gebser would explain, and that is the lesson that we need to take away from the shock events of recent years. Not real in the same way that we measure space and time, but real in our consciousness nonetheless. Modern, mental humanity gets very uncomfortable at the insinuation that reality has magical and mythical aspects. We deny the possibility. But, Gebser argues, that only makes us fools. ‘Those who are unaware of these aspects, fall victim to them.’

The resurgent power of magic and myth in society is a sign that our Age of Reason — the age of mind over everything — is reaching exhaustion. The project was flawed from the beginning, Gebser would say, because we can no more purge the magic and mythical from our reality than we can purge them from our language. Every time that we feel ‘disconnected’, or ‘unbalanced’, or feel anxious that we’ve ‘run out of time’, we betray our yearning to get back to the original oneness of space and time that’s now been completely carved up by rational thought.

In this moment of mental crisis, Gebser predicted, ‘soon we will witness the rise of some potentate or dictator who will pass himself off as a ‘savior’ or prophet and allow himself to be worshipped as such.’ (I’d say we’ve reached that point.)

But that prophet is false. He is, in Gebser’s words, ‘less than an adversary: he is the ruinous expression of man’s ultimate alienation from himself and the world.’ (Sounds about right.) He demonstrates that the latent, neglected power of magic and myth can still move us powerfully, but he does so by lashing out at our mental reality. In the end we’re left more fragmented.

The healthy response in this post-truth age can’t be to deny what reason has revealed to us. And it isn’t to purge magic and myth, either. (We can’t, and more to the point we shouldn’t, since doing so would also purge all emotion and inspiration.) Instead, Gebser thought, we need to ‘renounce the exclusive claim of the mental structure’ over what’s real, and reintegrate the magic and the mythic into our consciousness.

‘Like all ages, our generation, too, has its task.’ It is to learn to see ourselves ‘as the interplay of magic unity and mythical polarity and mental conceptuality and purposefulness. Only as a whole person is a person in a position to perceive the whole.’


So…Where To Begin?

I’m going to be chewing on all this for a long, long time. But my immediate take-aways are these:

  • Trust our magic and mythic impulses more. These impulses are everywhere today in our consumer society, in art, in science. Even corporate executives have started talking about making their companies more ‘soulful’. At the same time, we hesitate to follow them, because we don’t understand the rational basis for these impulses. Well, all the above gives us that rational basis, in a meta-sort-of-way. So we should ‘go with them’, and feel more sure about doing so. So long as we bring our mental awareness along with us, we won’t slip into New Age pseudo-spiritualism. We’ll end up somewhere more real.
  • It’s time to get past gawking at the inconsistencies and ignorance of the Donald Trumps of the world. Their ignorance is irrelevant to their power, and it’s precisely that power that we need to understand and integrate — in a healthier way — into our politics.
  • In history, periods of general confusion and anxiety ultimately arrived at new clarity and certainty. The more ‘awake’ we can be to the conflicts inside us, the sooner we’ll all get there. I like this quote a lot: ‘Our sole concern must be with making manifest the future which is immanent in ourselves.’ That’s deep.

Thoughts?

 


How We’ve Conquered Space And Time

Gebser offers two seminal examples. The first was the discovery of linear perspective during the Renaissance — pioneered by the Italian artist Filippo Brunelleschi, and perfected by Leonardo da Vinci. Linear perspective creates the illusion of depth — of a third dimension — on a two-dimensional surface.

How could a new style of drawing be of historical importance? It makes no sense, until you try to imagine what it was like to try to conquer space without it. Space is three-dimensional. If you don’t have any way of communicating ideas in three dimensions, then space is difficult to master. No two-dimensional picture of the human anatomy can prepare a medieval doctor for what he finds when he cuts open a patient; he can only learn from cadavers — and his own experience. No two-dimensional drawing of a long-standing tower can explain to an architect how to build it; she can only mock up a model, and hope that her real-life version stands the test of time, too. No two-dimensional drawing of a water wheel, or a clock, or even a knot, can reliably show a novice how to make one; he can only apprentice himself to a master and watch how it’s done.

But da Vinci’s drawings — for complex machines, for giant statues, for soaring bridges — can be followed, even centuries later, to bring his ideas into three-dimensional reality.

Until we had a technology to reliably represent space, the reality of space was a sort of prison that trapped our ideas. But with the advent, and perfection, of linear perspective, suddenly space became our prisoner.
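
For what it’s worth, the trick of linear perspective can be stated in a single line of modern notation (mine, not Gebser’s): a point in the scene at depth z, offset (x, y) from the line of sight, lands on the picture plane at (x′, y′) = (d·x/z, d·y/z), where d is the distance from the eye to the canvas. Everything shrinks in proportion to 1/z, which is why parallel lines receding into the distance appear to converge on a vanishing point.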

The second example Gebser offers was the discovery of space-time in the late 19th and early 20th century. Basically, we figured out how to think about time as a fourth dimension of space. Mathematicians call these conceptions of four and higher dimensions ‘non-Euclidean geometries’. Just as linear perspective helped us to measure and conquer space, our ability to represent time as ‘just another dimension’ improved our powers to measure and conquer time.
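
To make ‘time as a fourth dimension’ slightly more concrete (standard physics notation, not Gebser’s): the ‘distance’ between two events in spacetime is measured not by the Euclidean x² + y² + z², but by the interval s² = (ct)² − x² − y² − z² (in one common sign convention), where c is the speed of light. Time enters the geometry on the same footing as the three space coordinates, but with the opposite sign, and it is that minus sign which makes the geometry non-Euclidean.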

(When Stephen Hawking passed away recently, every newspaper in the world ran an obituary. None helped us to understand the significance of his most famous book, A Brief History Of Time. That book was all about trying to help the rest of us understand how physicists think about time as a fourth dimension — and why being able to do so makes a whole new era of scientific progress possible: from nuclear power to mobile phones to quantum computers.