Map #37: (Whose) Future of (What) Work

Says the Futurist: “The future is just over the horizon!” To which the social scientist replies: “Isn’t ‘the horizon’ an imaginary line that moves farther away as you approach it?”

A Useful Shorthand

On Thursday, I spoke at the Centre for Workplace Leadership’s 2018 conference on the “Future of Work.” For me, it was an opportunity to stop and think about a phrase that I’ve heard a lot these past couple years (and used a lot these past couple years!) but haven’t properly unpacked before.

It’s become a ubiquitous phrase on the lips of executives everywhere, in both the public and private sector. I’m not sure I would call it a “buzzword,” exactly (“buzz-phrase”?). To me a buzz-phrase—like, say, “systems thinking”—is “a concept that everyone agrees with but nobody can quite explain.”

The phrase “Future of Work” certainly attracts a lot of buzz. However, it refers not to a concept but to a list. A long, thorny list of work-related issues like:

  • Technological changes (especially AI and robotics), which are: eliminating some jobs entirely (e.g. truck driver); eliminating certain tasks within jobs (e.g. transcription); and creating new jobs with very different skill requirements (e.g. machine learning architect).
  • The emergence of “platforms” for matching people with jobs, from LinkedIn, to Uber, to Freelancer and Shiftgig and Upwork, which: change how and where training & recruitment happen; make it easier for freelancers and “digital nomads” to earn a living without having “a job”; and throw into doubt the whole notion of formal full-time contracts (after all: why hire a full-time employee when you can scale-up your staffing on-demand, on a project-by-project basis?).
  • The changing age structure of the workplace (at the entrance, the arrival of millennials and post-millennials into the workplace; at the exits, the elongation of people’s working lives into their 60s and 70s) — with resulting changes in workplace values and expectations.
  • Changing gender dynamics in workplace hierarchies (ranging from the #MeToo movement to the mainstreaming of transgender identities).
  • And a host of smaller, but equally thorny, changes underway—many of them technology-driven. (For example, have you taken a look at the introduction of biometric monitoring devices into the workplace? Huge ethical questions here, but—so far—little discussion.)

So “Future of Work” has become a shorthand for saying: Look—here’s this list of work-related issues. It’s long and thorny, and we as individuals, organizations and societies need to think our way through it. And we need to do so because the “present of work” is still heavily influenced by our industrial roots—by factory culture, by command-and-control management styles, by an over-emphasis on measurable efficiency and an under-appreciation of important intangibles (like creativity, health & wellbeing, inclusion or a sense of purpose).

It’s a useful shorthand. Simply by invoking the phrase “future of work” in an executive setting today, you can get everyone around the table nodding soberly and agreeing that these issues matter, that we need to respond to them somehow, and that a very different relationship between organizations and their employees is just over the horizon. So, no, it’s not a buzzword. It’s a rich and meaning-full phrase.


Shortcomings

But like all useful shorthands, this one, too, has its shortcomings.

Language is like a map we use to navigate the world. And geographers will tell you: no map is value-free. No map is a 100% objective description of the territory. What do we choose to put at the center of our map? At what scale do we draw the map? Which features do we include, and which do we omit?

It’s an inescapable conundrum at the heart of human social sense-making: in order to communicate something complex, we need to eliminate a lot of the complexity we want to communicate. And doing so involves choices—often private choices that we probably didn’t talk much about in public before they were made. Some of those choices, we weren’t even aware of when we made them.

So, from time to time, we need to return to the raw complexity and the choices we made when we distilled that complexity into new language. We need to return to the territory we’re trying to talk about, and refresh our awareness of what we’ve simplified away from the conversation.

How do we elicit the shortcomings of our shorthand? A good place to start is to trace the language we’re using back to its origins.


The history of the Future of Work

A quick Google Trends analysis tells the life story of this term. It popped briefly into common parlance (‘common search-lance’?) in October 2004. (I haven’t yet figured out what event might have caused that spike; if you have a theory, please share it.) But its recent climb into popular lingo began only in late 2013.

Why then? I have a hunch. In September 2013, Carl Benedikt Frey and Michael Osborne at the Oxford Martin School published a paper called The Future Of Employment: How Susceptible Are Jobs To Computerisation? In it, they analyzed the entire U.S. labour market, job-code by job-code, and concluded that 47% of all present-day jobs in the U.S. were at high risk of being automated away within the next decade or two.

47%. It was the sort of number that made people sit up and take notice.

That single paper has now been cited by academic researchers 2,817 times (or, about 2,800 more times than my doctoral thesis). But it’s also been cited tens of thousands of times (with widely varying accuracy) by media, pundits and the “commentariat.” (Subsequent papers by other researchers have tweaked the methodology but basically all arrived at the same conclusion: robots are coming to steal a lot of people’s jobs.)

In 2013, the idea that “machines will steal our jobs” was hardly new. Twenty years earlier, in 1994, Stanley Aronowitz wrote The Jobless Future. He was one of many thinkers at the time who looked at the emergence of networked computers (i.e., the Internet) and thought: If computers start “talking” to one another directly, that’ll have big implications for the people whose job it is to pass data around society.

And the wider question—of technology’s detrimental impact on work, jobs, and human behavior—is aeons old. In Ancient Greece, Socrates bemoaned the spreading technology of writing. (It would lead, he predicted, to a loss of memory, to more passive forms of learning, and “to endless disputation, since what one writer has written, another can challenge, without either of them meeting and arguing the issue to a conclusion.”)

What was new, from about 2012/2013 onwards, was the apparent rate of progress made by computer scientists in building systems that could do “pattern-recognition”: image recognition, facial recognition, natural-language processing and so on. “Some people would argue we’ve made more progress in those systems in the last 5 or 6 or 8 years than we’ve seen in the last 50 years.”

The recent, sudden acceleration in our computers’ pattern-recognition powers is due to three big factors:

  • The amount of computing power that we now have available to throw at these problems, thanks to the latest CPUs (central processing units) and GPUs (graphics processing units), and to on-demand processing in the cloud
  • The amount of data (and cheap data storage) that we now have available to train computer algorithms, thanks to the billions of pictures and voice streams and digital transactions that we all generate each day, every day, of our lives
  • The development of new pattern-recognition algorithms and techniques that take fuller advantage of all this computing power and data. Supervised and unsupervised machine learning, deep learning, convolutional neural networks, recurrent neural networks. These phrases mean very little to people outside the AI research space, but within this space they represent a global flurry of research, experimentation, progress and big money. (For a handy AI primer, see this earlier letter.)

A computer that can’t do anything until you explicitly tell it how to do something feels like a tool. A computer that can look over your shoulder, watch the patterns (i.e., tasks) you perform, and then perform the same patterns—more reliably, more precisely, without food or rest—feels like a replacement. Especially when it proves able to identify patterns in your own behavior that you yourself didn’t know existed.
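(For readers who like to see the “look over your shoulder” idea concretely, here is a minimal, hypothetical sketch of supervised pattern recognition. The task, the features and the data are all invented for illustration, and the scikit-learn library is just a convenient stand-in—nothing below describes any particular company’s system.)

```python
# A minimal, hypothetical sketch of supervised pattern recognition.
# The "task" is a routine judgment a person might make all day:
# should this invoice be flagged for manual review?
# Features and data are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [invoice amount, days overdue, new supplier? (1 = yes)]
past_invoices = [
    [120.0, 0, 0],
    [980.0, 45, 1],
    [15000.0, 10, 1],
    [75.0, 5, 0],
    [4300.0, 60, 0],
    [220.0, 2, 1],
]
# What the human reviewer actually did with each one (1 = flagged it).
human_decisions = [0, 1, 1, 0, 1, 0]

# "Watch over the shoulder": learn the pattern behind those decisions.
model = DecisionTreeClassifier(max_depth=3)
model.fit(past_invoices, human_decisions)

# Then perform the same pattern on unseen cases, tirelessly.
new_invoices = [[8900.0, 30, 1], [60.0, 1, 0]]
print(model.predict(new_invoices))  # e.g. [1 0]: flag the first, pass the second
```

The point of the sketch is that nobody writes explicit rules; the system only needs a record of what the human did.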

Stitch enough pattern-recognition systems together, and you start to get driverless cars and autonomous financial traders—systems that can actually do something in the real world without our (human) involvement.

And so, people started to worry about what’ll be left for human beings to do, once this technology spreads.

The history of the term “future of work” suggests that the “center of the map” has, from the beginning, been automation: this accelerating trend of software and machines taking over many of the jobs and tasks that are currently being performed by people. (A quick Google Image search is enough to confirm that this is still the case. The first page of results is full of our hopes and fears about automation.)


Who drew the map?

Automation is the mountain at the center of “the future of work.” In the shadow of this mountain, several other challenges to how organizations presently organize the workplace have been identified and drawn in—like the new platform-marketplaces that force organizations to rethink how they hire, train and retain employees, and collaborate with outside talent; like the widening range of ages being brought together to work on the same project; like the social media-spotlighting of pay- and gender-inequities in the workplace; like the growing tension between the organization’s power and incentive to find any patterns in every aspect of all its employees’ behavior versus each employee’s right to privacy.

When you step back and look at it, what’s interesting is that so much of the map is being drawn—so much of our thinking about the future of work is being done—from the organization’s perspective.

This makes total sense, for two reasons. First, inside organizations is where most work was done during the Industrial Age, and where most work is still being done now. And second, managers are the people in society who have the most time to think about these things. In fact, they’re the ones being paid to do so.

But this same reasoning also suggests that mapping the future of work from the organization’s perspective makes no sense at all. Or at least, such a map is unlikely to prepare us for some of the biggest features of that future landscape. Because one of the biggest differences between the present world of work and the future world of work may just be how much work won’t happen inside formal organizations at all.


Management, not Markets

For most of history, humans haven’t worked inside organizations. Even today, it’s a bit strange that we do. After all, we live in market societies. We’ve built our whole economy on the idea that an open market of buyers and sellers, haggling with each other to agree upon a price, is the best way for society to allocate resources and to organize production of the stuff we all want and need. “Why, then, do we gather inside of organizations, suspend the market and replace it with something called ‘management’?” as my friend David Storey at the consulting firm EY so elegantly put it to me.

In 1937, the Nobel Prize-winning economist Ronald Coase explained this strange behavior by introducing the now-familiar idea of “transaction costs.” Figuring out mutually agreeable contract terms with each other every time we needed to cooperate to get something done would cost a lot of time and money. In theory it might work; in practice it’d be impossible. Plus it would create a lot of uncertainty on both sides of every transaction. (Do I trust a freelancer to do a mission-critical piece of work—knowing that they can blackmail me just when I need them most? Would the freelancer buy a house near me, her client, knowing that I might decide at any time to work with someone else?) Putting work inside organizations makes economic sense.

By now, we’ve come to appreciate that it makes social sense, too. We are social animals. Organizations offer a shared, cooperative structure that outlasts specific participants who come-and-go. And they offer a ‘campfire’ for collective story-telling and learning.


Markets, not Management

But today, these rationales are less winning than they used to be. Online, external platforms are proving that efficient, thriving markets can now be created for once-unimaginably small, rare or vital exchanges—from a single hour of zen garden design work to trouble-shooting a software company’s core product. External platforms for learning (Coursera, edX, Udacity, Degreed, etc…) boast millions more users than any in-house training department ever could, and they can therefore mine better insights (via pattern-recognition) to create better learning pathways for learners.

Whether the incoming generation of adults values organizations for their social benefits is also in doubt. In some developed-country surveys (and I’m sorry; I’m still trying to find the link for you!), up to a third of today’s high school students say they’d rather be full-time freelancers than full-time employees. (In the same breath, it’s worth noting that loneliness, isolation and depression are also on the rise among young people. How will youth negotiate seemingly competing needs for freedom and belonging in the “future of work”? Big, open question.)

And yes, organizations remain excellent at retaining and transmitting learning and shared stories. But for the same reason, they’re poor at adapting. And during times of rapid environmental change, adaptability is a must-have survival skill. (Fun stat: In 1935, the average age of companies listed on the S&P500 was 90 years; today, it’s just 11.)


The future we can see, and the future we can’t

The “future of work,” which as a useful shorthand has helped organizations to accomplish 5 years of intensive, important reflection, rethinking and redesign, now needs to come to terms with its own shortcomings. Namely, that it is an automation-centric picture of how the workplace is changing, drawn from the organization’s perspective.

It is, in other words, a conversation about the future we can see—the future that, from where we’re standing now, we know is coming.

In many ways, I think this is the more important, more urgent future for us to explore. It wasn’t so long ago that most of us looked to the future and thought (or were told) that the European Union was inseparable, that Trump was unelectable, that globalization was irreversible, that China’s democratization was inevitable and that facts were incontrovertible. We failed to see a lot. As the British philosopher John Gray put it, “It wasn’t just that people failed to predict the global financial crisis. It wasn’t just that people failed to predict that Trump would become the U.S. president. What’s really sobering is that, for most of us, these things weren’t even conceivable. So we need to ask ourselves: What are we doing wrong, that we are unable even to conceive of the big changes that will transform the world, just 10 years down the road?”

Part of the answer, I think, is that whenever we explore “the future,” yes we might shift our time horizon, but we don’t often shift our point of view.


A people-centered perspective

When Copernicus proposed his sun-centric theory of the solar system, he was describing something (a) that he couldn’t possibly see and (b) for which he had no data. (Sort of like trying to describe the future.) Nonetheless, he was convinced that his sun-centric perspective was the right one, because his new map of the heavens was more intuitive than the old one that people had been using for the past 1,500 years. That old map had grown head-scratchingly complex over the centuries. As astronomers’ measurements of planetary movements became more accurate, the geometry of their orbits had to become more complicated to fit within an earth-centric model of the universe. But once you flipped perspectives and looked at the heavens the way Copernicus did, a lot of that complicatedness just fell away.

David Nordfors makes a similar argument for shifting from an organization-centric to a people-centric perspective on the future of work. (David co-founded the Center for Innovation and Communication at Stanford University, and now heads up the i4j Leadership Forum. We met up in mid-November at a private gathering of 100 “thoughtful doers” that I convened in Toronto.)

For the long version of David’s argument, I commend to you his recent book, The People Centered Economy. Here’s a brief flavor of why such a shift in perspective makes sense intuitively:

In an organization-centered economy, people are sought to perform valuable tasks for the organization, but the people who perform the tasks are seen as a cost.

In a people-centered economy, tasks are sought that make people’s labor valuable.

In an organization-centered economy, innovation (especially automation) presents a social problem. Automation makes it possible to do valuable tasks without costly people. Some people may lose their ability to earn a living entirely.

In a people-centered economy, innovation and automation present a social opportunity. Automation frees people up to do other tasks. AI helps people to find those other tasks more easily—other tasks that better fit their abilities and feel more meaningful to them. Organizations seize the opportunity to invent new human tasks and tools and match people with them, which can help people earn more, and feel happier, than was possible with the old tasks and tools.

In an organization-centered economy, corporations face a paradox. Each corporation is incentivized to reduce their wage expenses so that they can increase profits. But if enough corporations successfully do so, their consumers earn less money, spend less on their products, and corporate profits fall. (In the macro economy, a dollar of labor costs saved is also a dollar of consumer spending lost.)

In a people-centered economy, this paradox falls away. Corporations are in the business of creating opportunities for people to spend money and to earn money. Some people spend money to consume the corporation’s goods and services. Other people earn money by performing the corporation’s job-services.

If all this sounds a bit far-out, that’s a good indicator that—maybe—we’re starting to glimpse that elusive “future we can’t see.” But it’s also a rough description of eBay, Etsy, Uber, Airbnb and many other smaller two-sided platforms today, whose business model is already about serving buyers with ways to spend money and serving sellers with ways to earn money.

So it may sound far-out, but it may not actually be that far away. In his book, David offers an example of how organizations in the near future might reframe a job posting as an “earning service” instead:

Dear Customer,

We offer to help you earn a better living in more meaningful ways. We will use AI to tailor a job to your unique skills, talents and passions. We will match you in teams with people you like working with. You can choose between different kinds of meaningful work. You will earn more than you do today. We will charge a commission. Do you want our service?

As David summarizes, “This is a service that everybody wants but almost nobody has.”

But they will, and soon. I’m personally familiar with several efforts already underway to build businesses that make precisely that offer to people. One of the best I’ve seen so far is FutureFit.ai, which gets people to declare where they want to go professionally and then uses AI to plot them a personalized journey (via study, learning and work opportunities) to get them there. “Google Maps for the future of work and learning,” is how their founder, Hamoon Ekhtiari, sums up his vision.
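(FutureFit.ai hasn’t published its internals, so treat what follows as a toy sketch of the “Google Maps” metaphor only: represent roles, courses and jobs as nodes in a graph, weight the edges by effort, and plot the shortest path from where someone is to where they’ve declared they want to go. Every name and number below is invented for illustration.)

```python
# A toy illustration of the "Google Maps for work and learning" metaphor.
# Nodes are roles and courses; edge weights are months of effort.
# All names and numbers are invented—this is not FutureFit.ai's data or method.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("retail associate", "data-analysis course", 4),
    ("data-analysis course", "junior data analyst", 3),
    ("retail associate", "bookkeeping certificate", 6),
    ("bookkeeping certificate", "junior data analyst", 5),
    ("junior data analyst", "machine learning architect", 24),
], weight="months")

# Plot the quickest journey from the current role to the declared goal.
path = nx.shortest_path(G, "retail associate", "machine learning architect",
                        weight="months")
months = nx.shortest_path_length(G, "retail associate",
                                 "machine learning architect", weight="months")
print(" -> ".join(path), f"({months} months)")
# retail associate -> data-analysis course -> junior data analyst
#   -> machine learning architect (31 months)
```

One imagines that in a real system the nodes, edges and weights would be learned from labour-market and learning data, and the journey re-planned as the person progresses—but the route-planning framing is the heart of the metaphor.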


Here and now, just a (beautiful, lucrative) possibility

Like Copernicus in his day, it’s impossible to prove that this alternative perspective on the future of work is “right.” (Copernicus published his sun-centric theory in the early 1510s, and it wasn’t until Galileo pointed a telescope heavenward a century later that anyone had hard evidence to support his paradigm shift.)

But, like Copernicus’ new model of the heavens, a people-centered model of the economy is more intuitive. It resolves the head-scratching paradox that today’s businesses are being incentivized to automate away the consumer spending power upon which their profits depend.

And it’s more beautiful. David quotes Gallup’s chairman Jim Clifton, who estimates that in the present world of work: 5 billion people are of working age; most of those people want a job that earns them a living, but only 1.3 billion people actually have one; and of those 1.3 billion people, only about 200 million people actually enjoy their work and look forward to doing it each day.

Jim’s numbers suggest that humanity’s $100 trillion global economy is running at only a fraction of its capacity. How much more economic value could we collectively generate if we used AI and automation to connect more of the world’s 5 billion workers with learning and work that matched their talents, passions and sense of purpose? How much happier would we collectively be?
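(A quick back-of-the-envelope calculation, using only the figures quoted above, shows just how small that fraction is:)

```python
# Back-of-the-envelope arithmetic, using only the figures quoted above.
working_age = 5_000_000_000           # people of working age
with_living_wage_job = 1_300_000_000  # people with a job that earns them a living
engaged = 200_000_000                 # people who actually enjoy that job

print(f"Working-age people with a job that earns a living:   {with_living_wage_job / working_age:.0%}")  # 26%
print(f"Of those job-holders, the share who enjoy the work:  {engaged / with_living_wage_job:.0%}")      # 15%
print(f"Working-age people both employed and engaged:        {engaged / working_age:.0%}")               # 4%
```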


Keeping an eye on the future we can’t see

Fundamental shifts in how society looks at things—like work, or health, or wealth, or education…—don’t happen overnight, or all at once. And they’re rarely total. Paradigm shifts are a messy, social process. Multiple paradigms coexist for a long time, until the new paradigm reaches an invisible tipping point and simply is the way that most people think.

In an organization-centered economy, innovation is about coming up with new tasks that machines can do, and new products and services for people to consume. But in a people-centered economy, a lot of innovation will also focus on coming up with new tasks that people can do to earn a better living.

Innovation in this people-centered vein is already beginning. Expeditions aimed at reaching a people-centered future of work have already set forth—in a few markets, with a few startups, in nascent ecosystems. These efforts are not purely altruistic; there are vast profits to be made. That’s why we can be reasonably sure that these efforts will continue, and expand.

There’s gold to be found. Someone’s going to strike it. And then there’ll be a rush.

Bearing in mind (with all the humility gained from the last decade of political, economic and technological shocks) that preparing for the future we can’t see may be even more important than preparing for the future we can see, there are three questions that, I think, can help us keep an eye on these people-centric possibilities:

  1. For us as Individuals: How can we create more alignment between what we ourselves deem valuable or important and what we do to earn a living?
  2. For us as Organizations: How can we support individuals in making those changes?
  3. For us as a Society: How can we invite excluded populations into that personal search for alignment between work and value? (e.g. the unemployed, people with “disabilities,” people doing unpaid work (child/elderly care), children in school, the elderly)

Because that, I think, is what we all really want the Future of Work to look like.

Map 36: Markets vs Morals (Or, The Unstoppable Force vs The Unquenchable Fire)

I was on stage at the Oslo Business Forum earlier this week. (Email me if you’d like a copy of my slides.) The day opened with CNN anchor Richard Quest and also included Andrew McAfee from MIT, but the star attraction for the 3,000 business leaders who came to the one-day event was Barack Obama. (He’s on a four-day swing through the Scandinavian lecture circuit this weekend.)

My main message to this audience was that if we want to really understand the driving forces behind everything that’s going on in the world today, then we need to grasp two things, simultaneously: the realities and the possibilities of the present.

By focusing on the realities of the present, we see the unstoppable forces that are transforming the economy and society. Forces like automation. The opportunities to replace humans with machines and algorithms are multiplying rapidly for every employer. The “returns on investment” and “payback periods” (key metrics for any investment decision) were already attractive. Now, they look so good that it is irrational not to automate. (Here’s a concrete example I know: a large bank recently took a single business process that employed 51 people and eliminated half of those jobs through a combination of chatbots, robots and machine learning. Within just seven months, the labour cost savings paid for the technology investment.)
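(To see why that arithmetic has become so compelling, here is a hedged, back-of-the-envelope sketch of the two metrics named above. Only the shape of the example—roughly half of a 51-person process automated, payback in about seven months—comes from the story; every cost figure below is invented for illustration.)

```python
# A back-of-the-envelope sketch of payback period and return on investment.
# Only the shape (51 people, ~half the roles, ~7 months) comes from the letter;
# every cost figure is invented for illustration.
roles_automated = 25               # roughly half of the 51-person process
monthly_cost_per_role = 6_000      # hypothetical fully-loaded cost per person
technology_investment = 1_050_000  # hypothetical one-off cost of bots + ML

monthly_savings = roles_automated * monthly_cost_per_role
payback_months = technology_investment / monthly_savings
first_year_roi = (12 * monthly_savings - technology_investment) / technology_investment

print(f"Payback period: {payback_months:.1f} months")  # 7.0 months
print(f"First-year ROI: {first_year_roi:.0%}")         # 71%
```

With numbers in this ballpark, the investment case more or less makes itself—which is the “unstoppable” part of the story.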

The incentive that each individual company faces to automate whatever it can is, at the level of the whole economy, obliterating well-paying “middle class” jobs. These are the jobs that once made it possible for people without advanced degrees to still enjoy a “middle-class” lifestyle (i.e., to buy a good house and put the kids through school). Many of us resist this trend, but the reality is—to a large extent—it has already happened.

That’s just one of the realities of the present.

If instead we focus on the possibilities of the present, we see evidence of humanity’s unquenchable fire. We see evidence of our willingness to reach beyond what is, maybe to risk everything, in order to achieve some new, intangible, higher state of justice, or goodness, or equity, or prosperity. To blow up how society thinks and what society values in a glorious collision with the unstoppable forces.

We can see this fire today in the clashes between female empowerment and male privilege, or between “the traditional” and “the modern” family. In the contest between isolationism and globalism. In the geopolitical fight to defend democratic chaos or spread authoritarian orderliness. We can see this fire in the battles between private ownership and public regulation of technology platforms. In the contest between my right to amass wealth for myself, and the social movement to guarantee a minimum income for everyone. And maybe, most fundamentally, in the contest to define what’s real: enlightenment values of collective reason on the one hand, faith-in-the-strongman on the other.

The single biggest question of our time is simply: What happens when the unstoppable forces meet the unquenchable fire?

And I think the answer is leadership. Leadership is what happens. And by leadership, I mean the courage to stand in the midst of this collision and try to figure out what to preserve and what to reinvent. What to accelerate and what to annihilate. (I then went on to lay out a “leadership manifesto.” I won’t bore you with it here. Email me if you want a copy—but then I’ll ask you to critique it with me.)


What Money Can’t Buy

One of my inspirations for telling this story about economic realities vs moral possibilities was another book by Harvard philosopher Michael Sandel I read this past summer: What Money Can’t Buy: The Moral Limits of Markets (2012).

In short, it’s a book in which Michael notices that money and markets have now penetrated many areas and activities of society where, previously, they didn’t belong. His examples range from the small—amusement parks that now sell premium passes letting you jump the queue (“Cut to the FRONT at all rides, shows and attractions!”); scalping tickets for campsites at Yosemite; giving a ghost-written toast at your best friend’s wedding—to the large: paying drug-addicted women cash incentives to undergo sterilization or long-term birth control; public programs that pay kids who raise their test results in school; selling permanent residency or citizenship to foreign investors; or selling pollution permits and carbon offsets, i.e., selling the right to indulge in pollution.

Many of our moral choices have now been converted into market exchanges. Perhaps this is a good thing. After all, the market is an efficient way of allocating society’s resources. Many things in society—from Yosemite campsites to hospital beds to residency visas—are scarce, so the question becomes who should get them? The market is one way of answering that question, by holding an endless auction that distributes them by willingness to pay.

Crude Logic

Or perhaps, our moral choices have not just been converted, but downgraded. This is Michael’s view. First, he challenges the notion that the auction-logic of the market yields an “efficient” outcome for society as a whole. I think back to my day at Wimbledon this summer. Some of the best seats, which had sold for the highest price, were empty. Why? Because the people who had bought those seats didn’t value them enough to be there on the day—unlike the thousands who had been queueing outside since the day before. Maybe society would have been better served if those seats had been sold at a pittance to youth who might take inspiration away from watching champions play.

Whenever we use markets to solve the problem of who gets what, Michael argues, then we need to be on guard for two new problems. The first, obviously, is inequality. “The more money can buy, the more affluence (or the lack of it) matters.”

The second problem is that we run the risk of corrupting the thing itself. If we pay kids to get better grades, are they internalizing a love for learning, or are we training their brains to respond to external incentives? If citizenship is sold to wealthy foreigners, do they approach their new community with a citizen’s sense of duty and responsibility—or with a property owner’s sense of entitlement? (At an Oxford alumni forum last week, I coined the (awkward) word “crudification” to describe this idea. If I let Domino’s tattoo its logo on my body in exchange for free pizza for life, I don’t just monetize—I crudify—my nature as a unique human being. And yet…tempting!)

Crude Conversations

Said the Harvard economist Greg Mankiw: “There is no mystery to what an ‘economy’ is. An economy is just a group of people interacting with one another as they go about their lives.” When we convert our choices from moral logics into market logics, we are changing the nature of our interactions with one another.

The broadest implication of this trend, Michael thinks, is that our public discourse is being bled of moral content:

The problem with our politics is not too much moral argument but too little. Our politics is overheated because it is mostly vacant, empty of moral and spiritual content. It fails to engage with big questions that people care about.

(I wonder if Michael watched the Brett Kavanaugh circus on Thursday…)

Take immigration—one of the most heated topics in our politics today. Anti-immigrant protestors cast their objections in utilitarian terms: safety, security, jobs. Immigration advocates do the same. So the debate over whether to build walls or windows around our society is crudified (there’s that awkward word again!) down to a debate over what either choice would mean for incidents of violent crime, for wages and unemployment, for taxes paid versus welfare benefits consumed.

But those are very different debates from the moral logic that, for example, graces the Statue of Liberty:

Give me your tired, your poor,

Your huddled masses yearning to breathe free,

The wretched refuse of your teeming shore.

Send these, the homeless, tempest-tost to me,

I lift my lamp beside the golden door!

It’s true that taking in more, or fewer, refugees has labour-market consequences. But why does so much of our public debate focus on what these consequences are? Wouldn’t a healthier—and certainly, richer—public debate invite the many other dimensions of this question? What happens to “our” culture when more or fewer “outsiders” come in? If we widen our sense of “we” to include “them,” does that somehow make us better (the cosmopolitan’s view of the world) or does that somehow make us confused and corrupted (the nationalist’s view of the world)? And: Do we have a moral responsibility toward refugees that outweighs these profound cultural questions? If so, from where does that responsibility come: our faith? our common humanity? enlightened self-interest? And if so, what are the limits of that responsibility? How do we balance our needs against the needs of the “homeless, tempest-tost”?

Crowding Out Morality

The “marketization of society,” Michael thinks, is to blame for the growing absence of these sorts of conversations from public discourse. The immigration debate is just one instance of a society-wide falling out of the habit of moral reasoning. As the share of our interactions with one another that take place in the market grows and grows, market logics become our go-to argument for why we should, or should not, do anything.

And once market logics enter our conversations, moral logics tend to get crowded out. One of the appealing features of markets is that “they don’t pass judgement on the preferences they satisfy.” If you are willing to sell X and someone else is willing to pay your price for X, does it matter what X is? That’s your business. (Subtext: market logics apply.) Who is anyone to judge? (Subtext: moral logics don’t.)

In this way, the market has become an instrument for advancing personal freedom against conventional constraints. And maybe that is why the expansion of the market into more and more parts of our lives seems inexorable. Seems to be an unstoppable force. Because the market not only competes with our conception of “the good.” It has become part of our conception of the good.

(As an aside, it’s worth remarking how persistent the appeal of the market is. September was the 10-year anniversary of the collapse of Lehman Brothers and the global financial crisis. If you want to explore (a) the spectacular scope of that market collapse and (b) how little it did to dampen our faith in the power of markets to steer society in the right direction, I recommend my friend Ian Goldin’s fantastic new BBC five-part radio series on the financial crisis, After The Crash. A must-listen.)


An Unquenchable Fire Comes To London

I have trouble imagining how to practically do what Michael argues we must do for the wellbeing of our society: namely, to arrest the monetization of everything, to roll back market forces, and to resurrect a wider role for morality in how we interact with one another.

Fortunately for me, my friend Professor Dr. Alejandro Jadad has been in London this week, and we had a chance to meet up again. Alex has a lifetime of experience standing in the path of unstoppable, “cartel” forces—from his childhood in Colombia to his present efforts to upend the global healthcare industry.

Alex has held two distinguished Canada Research Chairs (“a Canada Research Loveseat,” he jokes), is the founding director of a global e-health innovation centre at the University of Toronto, and has more honorary letters behind his name than I have letters in my name. He is enormously successful, along every conventional dimension of success. And he’s a radical. “I am fearless in my beliefs and not afraid to die for them,” he told me on Friday as we waded, mischievously, through the reflecting pond at the V&A Museum. I believe him.

That combination makes him dangerous. “Those who challenge society’s current models put themselves at risk every day. They are harassed all the time,” he said in an interview a few years back. But Alex is one of those people who refuses to be cowed into silence and, given his conventional successes in a world that equates success with credibility, he cannot easily be ignored when he speaks.

If we’re looking for where and how to start pushing back against the monetization of everything, for Alex the answers are numerous and obvious. (I’m now hitting my word limit, so I’ll just briefly tease three of his biggest ones, then list some Further Reading if you want to dig deeper.)

1. “Development”

Alex has written:

I dislike the word ‘development’ because it was created, in the sense we use it today, in the 1940s as a means to emphasize the need to live like we do in North America or Western Europe. And so it divides the world between those who have material goods that money can buy, and those who do not. You need money for roads, you need money for houses, you need a car like mine. When you get enough money and get these things, and are able to live like me, then you become ‘developed’ like me. Until then, you remain ‘underdeveloped’.

How would our concept of “development” be different if market logics played less of a role in shaping our understanding of it? First, we would recognize other dimensions of abundance—of talent, of energy, of wisdom and other types of resources—that exist in almost every community in the world, regardless of how much money they have available. And second, we would recognize other dimensions of need—physical, mental, social. In short, we would understand development in richer, fuller terms: as a process of moving towards human flourishing, and away from suffering. We would more fully harness our creativity and diversity as individuals or communities to achieve that end. And we would more fully recognize the possibility that a community with lower GDP per person could be more “developed” than a community whose GDP per person is higher.

2. “Philanthropy”

Alex, who like me is fascinated with the origin of words, likes to point out that “philanthropy” literally means “to love + human beings.” Philanthropy should, at root, be about giving love. But it’s not anymore:

Go to a modern dictionary. “Philanthropy” is described as the donation of money to good causes. The definition goes straight to money. Somehow the meaning of the word morphed into a transactional activity in which money is the main thing that is transferred, from a place of abundance to a place of scarcity. People or organizations or countries that have an abundance of money transfer that money to a group that is in deficit, with the assumption that money will make things right.

The original meaning of philanthropy has been corrupted—crudified—by monetization. Or, as Alex memorably puts it: “Yes, I might have money, but I have many other things, too. If as a philanthropist, I just give money, when I could be loving more—then I am doing a half-assed job.”

How would our concept of “philanthropy” be different if the market played less of a role in our understanding of it? We would probably blur the distinction between “philanthropy” and “volunteerism.” We might start to recognize that philanthropy, as a transfer from my abundance to your scarcity, is about so much more than money. We might start to recognize more kinds of affluence within ourselves—and more kinds of scarcity in others. And maybe we would miss fewer opportunities to share our affluence with those in need.

3. “Health”

“Health” is the field where Alex spends most of his professional time, and where he has achieved global eminence. In 2008 he started a global conversation on the meaning of health among his peers, sponsored by the British Medical Journal. His argument: “Health” has become a market good—something that we can possess if we can afford to pay for it. Along the way, we have sacrificed the notion of health as an ability to achieve wellbeing for ourselves. Another memorable Jadad-ism: “Our sense of health is being squeezed between the merchants of death (i.e., drugs, alcohol) and the merchants of immortality” (i.e., the healthcare industry).

How would our concept of health be different if the market played less of a role in our understanding of it? We would, Alex argues, once again see health as an ability that we possess and, just like other abilities, can develop. Even as we age. Even as we suffer from chronic illnesses.

We might also start to see death differently. In Alex’s mind, the market has so penetrated our notion of “health” that it has now also corrupted the nobility of death and dying. Death was once the ultimate human equalizer: an honour that was afforded equally to every human being. The French philosopher Michel de Montaigne (1533-1592) wrote that “Death is part of you. Your life’s continual task is to build your death.” But now it is the ultimate symbol of affluence or poverty—of how much time aboveground we each can afford.

Unstoppable Force Or Unquenchable Fire?

So: Who wins when the unstoppable force of economic reality meets the unquenchable fire of human morality?

As for Alex, he is a self-described “cheerful pessimist.” Pessimistic, because he knows better than most people how powerful these unstoppable forces can be. Cheerful, because, for him, happiness doesn’t depend on what other people choose to do. He invites people to act according to very different logics than pure market thinking would dictate. But he doesn’t expect them to.

“I only want to do beautiful, magical things,” Alex tells me quietly.

Will he? Will we?

That’s the big open question of the now.


Further Reading

Here are a few digital gems that Alex has hidden on the interwebs…

Map #35: Fake News—Or Honest Propaganda?

“I think Trump may be one of those figures in history who appear from time to time to mark the end of an era and to force it to give up its old pretenses.”

Henry Kissinger, Financial Times, July 2018

It’s been a long, hot summer. And I spent most of it far away from my writing desk—more time away than I thought I would, or meant to. Please forgive me!

It’s been time well spent. Replenishing the well. And I hope this finds you, well.

Smiles,

Chris


Old Pretenses, New Players

When I read that Kissinger quote over the summer, I wrote it down in my notebook. And I’ve been turning it over in my head. Love him or hate him, Henry Kissinger says a lot of things that make you think.

This quote rings true. There is a thick strand of “I’m just saying publicly what you’ve all been thinking and doing privately” to many of Donald Trump’s public moments as US president. As when Bill O’Reilly on Fox News called Russian President Vladimir Putin “a killer,” and Trump responded with: “What, you think we’re so innocent?” Or when he bluntly proclaims that his foreign policy is “America first” and demands that other countries recognize the reality of American dominance in trade negotiations. Or when he openly manipulates domestic public opinion by publicizing lies, and shrugs off any sense of guilt or shame at having done so, because it’s all fake news anyway.


“Fake News” & Our Oldest Pretense

What is the “old pretense” that the persistent cry of “fake news” is asking us to give up? Nothing less than the central myth of liberal democracy. Namely, that a “public sphere” exists in which voters, who possess a certain degree of knowledge and critical thinking skills, take an interest in and take part in rational discussions. Why? In order to help discover what’s “right” or what’s “just,” guided by some inkling of the general interest. That’s why we need facts, why we need real news: so that we can exercise our responsibility, as citizens, to take part in this public sphere of discourse and deliberation toward rational judgments that serve the common good.

Uh-huh.

This pretense reminds me of the central myth in classical economic theory—that people are “utility-maximizing individuals.” Anyone who studies economics past the first-year introductory courses spends a lot of time reading about why that myth doesn’t describe how people really think and behave. This myth of how our democracy works doesn’t describe how voters really think and behave, either. It makes many strong assumptions about the personality of the typical voter: that he or she is interested in public affairs; that he possesses knowledge on questions of public interest and an accurate eye for observing the world; that she has well-formed moral standards; that he wants to engage in communication and discussion with people who think differently; and that he or she will do so rationally, with the community interest in mind.

Uh-huh.


Myth vs. Reality

The research shows—and the last couple years, surely, have proven—that’s not at all how today’s “advanced liberal democracies” function. The myth is that people on different sides, or in different situations, talk with one another. The reality is that most conversations of a political nature in society are confined to in-groups, to family, friends and neighbours.

The myth is that higher levels of “engagement” and “participation” in political discourse will yield a healthier democracy. The reality is that those who engage in political discussion more frequently tend to do no more than confirm their own ideas.

The myth is that voters who have not declared which party or person they’ll vote for in the next election are “undecided.” The reality is that these voters, who tend to fluctuate between parties, tend to know and care less than those who reliably vote one way or the other. “Undecided” is a euphemism. The label pretends that these voters are still deliberating. “Not altogether indifferent” would be more accurate. (The “altogether indifferent” voters don’t vote at all.) And the way you “swing” these voters, if you talk to any campaign manager, is not to appeal to their faculties of reason or policy preferences, but to treat them as consumers and advertise to them with the same tactics that motivate people to make a purchase decision.

The myth is that voting is the periodic, concluding act of a perpetual, rational controversy carried out publicly by citizens. The reality is that, for most voters, it is their only public act.

In a democracy, real, reliable news is supposed to matter, because public opinion, if it is to fulfill its democratic function, must first fulfill two conditions: it must be formed rationally, and it must be formed in discussion. And we can’t do either of those two things if our public sphere is full of people lying freely.

If the above paragraph were entirely true, “fake news” would be troubling, because fake news makes our rational discourse harder.

But more deeply troubling is that the above paragraph may be entirely false, and we are finally being forced to admit it. In a democracy, real, reliable news no longer matters, because the idea that public opinion is formed rationally, in controversy with fellow citizens, has long since passed into pure fiction. Instead, today, public opinion is something to be temporarily manufactured, on a periodic basis, to dress up our prejudices in rational argument, and in order to win a ritual-contest for raw power (i.e., an election), the result of which determines which group gets to oppress the other for the next few years.

These are the pretenses that come into focus for me—when I re-read Henry Kissinger’s quote, and when I think about the popularity of the phrase “fake news” today.

“I think Trump may be one of those figures in history who appear from time to time to mark the end of an era and to force it to give up its old pretenses.”

Henry Kissinger, Financial Times, July 2018


’Twas Not Always So

How did our public discourse come to this, with pretense and reality standing so far apart?

It’s helpful to bring a sense of history to the preoccupations of our present moment. (If you don’t like digressions, skip to the next section. 🙂 ) In academic circles, the man who literally wrote the book on the history of the “public sphere” in the democratic world is Jürgen Habermas (1929- ). According to Jürgen, you’d have to go all the way back to the 18th century to find a democracy in which real news actually mattered in the way we merely pretend it does today. Then, in England, France and Germany, you would have observed citizens getting together in salons and coffee shops, debating the latest opinion essays and newspaper reports, and reaching, through deliberation with one another, consensus, compromise and a settled opinion of where the public interest lay. This public sphere wasn’t a mere audience to information and ideas; it was the gauntlet through which ideas had to pass in order to enter public relevance. “There was scarcely a great writer in the eighteenth century who would not have first submitted his essential ideas for discussion in such discourse, in lectures before the academies and especially in the salons,” Jürgen wrote.

You would have also observed that these citizens were almost exclusively men, and property owners.

It was these “classical liberals” of 17th- and 18th-century Europe who introduced the modern ideal of rational, public discourse that our democracies still play-act today. For them, this ideal emerged as an alternative to the absolute power wielded by kings and queens. The problem was this: subjects, who were ruled by the crown, weren’t free. To be free, the crown’s power had to be taken away. But somebody had to rule. How could the people wrest absolute power away from the king, without creating another king in their midst? How could the people dominate and be free at the same time?

The classical answer to this conundrum was that reason, not man, should rule. It made sense. A law, to be just, had to be abstract. It had to be general—a just principle that could be applied to a number of specific cases. Now, who was more likely to reliably articulate such general principles? Who could be trusted more? A single monarch? Or the wider public, whose many members could argue the many cases that the principle needed to cover?

Public debate would transform people’s individual preferences into a rational consensus about what was practically in the interests of everybody. And if government made rules that way, then citizens would be both dominated and free at the same time. Ta-daa!

It was an elegant theory. And for a time, it worked. But one way to summarize the history of the last few centuries (at least across the democratic world) is as an attempt to expose how arrogant this theory also was.

The German philosopher, Georg Wilhelm Friedrich Hegel (1770-1831), called bullshit on two key assumptions, upon which the whole theory rested: first, that a conversation taking place exclusively among property-owners and merchants could ever arrive at an understanding of the universal interest; and second, that in any such conversation, “reason” could rule, free from natural social forces of interference and domination.

At a minimum, the “working class” needed to be included in the conversation. And this is where Karl Marx (1818-1883) and Friedrich Engels (1820-1895) entered world history. “Public opinion,” Marx argued, was really just fancy language that the bourgeoisie (property-owners) used to dress up their class interests as something good for everyone. The idea that debates in the “public sphere” produced rational laws that made men free wasn’t some profound truth; it was mere ideology. Specifically, it was the ideology of those people who, in the “private sphere,” actually owned something, and therefore needed the protection services that the “public sphere” could supply. The only way to turn the public sphere into the actual factory of freedom that liberals claimed it to be (rather than just another social space in which one class oppressed another) would be to put everything that was private into this sphere. Then, and only then, class divisions would disappear and people would genuinely, rationally debate the communal interest (hence, “Communism”).


Humpty Dumpty (Or, Our Fractured Public Sphere)

But I digress. (Frequently! I know…Sorry!)

Communism was a bust, but the workers movement was not. Marx and Engels helped those who had been on the losing side of the Industrial Revolution recognize themselves as a class with interests and political power. The democratic states that emerged out of the chaos of World War I and II were countries that saw the working class play a much bigger role in society. The vote was expanded to everyone; unions forced companies and governments to set limits on how landlords and business owners could run their apartments and factories; the welfare state was born, and expanded, to protect workers from exploitation, illness and injury and to supply them with “public” goods that had, in the previous century, been largely private—education, healthcare, and law & order.

The point of my long digression is this: Almost since the day it first came into being, the “public sphere” has been losing its claim to be a place for similarly situated citizens to reach reasonable agreement through free conversation. Instead, it’s fractured into a field of competition between plural, conflicting interests—big conflicts (like capital versus labour) that (history suggests) might not rationally fit together again. It’s the Humpty Dumpty problem. And if nothing like rational consensus can possibly emerge from debate between these competing interests, then the whole exercise can, at best, only produce an unstable compromise, one that reflects the present temporary balance of power.

By consequence, the press and media have been losing their claim to be organs of public information and debate. Instead, they’ve become technologies for manufacturing consensus and promoting consumer culture—long before “social media” became a thing. (I think, for example, of how the US government manipulated public opinion during the Vietnam War…has anyone else seen the excellent documentary on the war by Ken Burns on Netflix?)

Jürgen wrote his seminal book on the history of the public sphere back in 1962. Already then, he pointed out that at the heart of our democracy, there lies a growing contradiction. On the one hand, the public sphere—that elegant place of rational, public discourse—has shattered. It’s been replaced by “a staged and manipulative publicity,” performed by organized interests before an audience of idea-consumers. But on the other hand, we “still cling to the illusion of a political public sphere,” within which, we imagine, the public performs a critical function over those very same interests that treat it as a mere audience.

What Trump has done is dare to drop the pretense. He uses media technologies not to inform public opinion, but to manipulate it. By his success in doing so, he forces us to recognize that, yes, that is in fact what these technologies are good for. And he forces us to recognize that, no, one does not need to be armed with facts or rational argument to use them for that purpose.


Dominated or Free?

Are we witness, then, to the death of democracy’s central myth?

If so, the implications are grim: we’ve failed, as a political project, to build a society of citizens who are both dominated and free at the same time. Instead, we must be either one or the other, depending on which side won the last election.

Jürgen, for his part, tried 55 years ago to end his assessment on a hopeful note. In his dry academic prose, he wrote, “The outcome of the struggle between a critical publicity and one that is merely staged for manipulative purposes…is by no means certain.”

That’s academic code-speak for, “I’ve defined the problem for you; now go out and fix it!”

(I won’t try to cram a few hasty bullet-point solutions into this letter. Instead let me close by letting you be the first to know that my next book, co-authored with Alan Gamlen, tackles this challenge. But more on that next week…)

Until then,

Brave voyages,

Chris


Map #34: The American Dream—Or Fantasyland?

After my last letter (which took a deep dive into my friends’ research on the “power of doubt”) and with Donald Trump here in Europe for meetings with NATO, the Queen of England and Vladimir Putin all in the same week, now felt like the ideal moment to read and reflect on Fantasyland: How America Went Haywire, by Kurt Andersen.

When it comes to the state of the world nowadays, it seems we can’t say anything without the aid of one or more phrases that, until recently, we’d never used before: Fake News. Post-Truth. Alternative Facts. Check Google Trends for yourself: these terms barely existed in our lexicon prior to Oct/Nov 2016. The newness of these patterns of speech enhances the impression that something very new (and, depending on your politics, very disturbing) is happening to our political discourse.

But Kurt’s research (which he began back in 2013, long before the political earthquakes of 2016) argues that the present-day collapse of objective truth isn’t new at all. In hindsight, we’ve been eroding its foundations for 500 years.

By “we,” Kurt means Americans, but I think the U.S. case offers a cautionary tale to every democracy that traces its roots back to the Enlightenment.

It certainly got me thinking.

Warmly,

Chris


A Recipe For The American Dream

To summarize crudely: European settlers to the New World brought with them two sets of beliefs that, when mixed together, produced the early American outlook on life: Enlightenment beliefs about the world, and Puritan beliefs about the self.

Free-Thinking

These settlers had been born into a Europe that was rapidly breaking free from its medieval mindset. In spectacular fashion, the voyages of Columbus, the mathematics of Copernicus and the astronomy of Galileo had all proved that neither the Bible nor the ancient Greeks had a monopoly on truths about the world. Present-day reason and observation could add to—even overturn—the oldest facts about the world.

Self-Made

They had also been born into a Europe that was violently tearing itself into Catholic and Protestant halves. Martin Luther, who began his career as a devout monk, ultimately denounced his Catholic Church for standing between God and Christians. The Catholic Church’s arrogance was to believe that each tier in its hierarchy stood one level closer to God. Ordinary Christians, if they ever hoped to reach God, needed that hierarchy to give them a boost.

Said Luther: Bullshit (or words to that effect). Every baptized Christian had a direct and personal relationship with God, and everything they needed to do to grow that relationship, they could read for themselves in the Bible. Salvation lay not in the grace granted by a priest, but in their own diligent, upright behavior.

Question: What do you get when you combine Enlightenment beliefs about the world (“Facts are findable”) with Puritan beliefs about the self (“I am whatever I work hard to become”)?

Answer: The American Dream (“I can change my reality.”)


Too Much Of A Good Thing

Taken in moderation, Kurt argues, America’s two settler philosophies powerfully reinforce each other. The free-thinker, who believes that the truth is out there just waiting to be discovered, gives the self-made Puritan something tangible to strive toward: Better knowledge. Deeper understanding. A practical pathway toward that utopian “shining city on a hill.”

Of course, Americans rarely do anything in moderation.

Some 2,400 years ago, Aristotle observed that every virtue is a balancing act. Courage is a virtue. But too much courage becomes a vice, recklessness. Confidence is a virtue. Too much confidence, and we begin to suffer from hubris (a recurring theme in my recent letters, here and here). Honor can slip into the zone of pride and vanity. Generosity can slip into wastefulness.

It’s one of the big, recurring themes in human myth and morality. Daedalus warned his son Icarus to fly the middle course, between the sea’s spray and the sun’s heat. Icarus did not heed his father; he flew up and up until the sun melted the wax off his wings. (He fell into the sea and drowned.) The Buddha taught people to walk the Middle Way—the path between the extremes of religious withdrawal and worldly self-indulgence. And so on.

What would happen if America’s founding philosophies were pushed to their extremes, with no regard for the balance between virtue and vice? To summarize Kurt’s thinking, in my own words:

The same outlook on the external world that entitles us to think for ourselves can lead us to a place of smug ignorance. How far a journey is it from the belief that facts can be found to the belief that facts are relative? (I found my facts; you are free to find yours.) How far a journey is it from the healthy skeptic who searches for the truth “out there,” to the suspicious conspiracy theorist who maintains that the truth is still out there? (i.e., Everything we’ve been told up until now is a cover-up, and no amount of evidence is going to shake my suspicion that the real truth is being hidden from us.)

Likewise, the inward-looking conviction that we are self-made can, if pushed to extremes, leave us self-deluded. If we follow our belief that “I am whatever I work hard to become” far enough, might we arrive at the conviction that “I am whatever I believe I am”? Might we arrive at the conviction that self-belief is stronger than self-truth?

Unbalanced and unrestrained, the same two philosophies that produced self-made free-thinkers in pursuit of the American Dream (“I can change my reality”) can also spawn a Fantasyland of self-deluded ignoramuses, for whom reality is whatever they believe hard enough.


Keeping It Real

That destination isn’t inevitable. Icarus didn’t have to fly so close to the sun. Enlightenment and Puritan traditions both contained wisdom meant to temper the power of the individual to change things with the ‘serenity’ to accept what he/she could not change. (Most of us have heard the Serenity Prayer, written by the American evangelical, Reinhold Niebuhr (1892-1971), during the Great Depression.)

What separates freedom of thought from smug ignorance? The answer, for Enlightenment thinkers, lay in the virtues of humility, self-doubt and reasonableness. (If the privileged Truths of the past—a flat Earth, a Flood, a cosmos with humanity at the center—had to accept demotion in the face of new ideas and fresh evidence, what right did any of us have to insulate our own favorite Truths from the same tests?)

And what separates the self-made from the self-deluded? The answer, for early Puritans, lay in the virtues of hard work, discipline and literacy. (Martin Luther told Christians that they could have a personal relationship with God—‘DIY Christianity’, as Kurt calls it. But they still had to get their hands on a Bible, learn to read it, struggle to decipher it and strive to live by it. His was a harder, more demanding path than the Catholic way of taking communion and giving to charity once every seven days.)

Unchecked By Reality

These are precisely the virtues that are being eroded in U.S. society, Kurt thinks. And not by the recent emergence of social media, or of cable news, or of unlimited corporate cash in politics. It’s been happening for 500 years.

I’m a poor student of American history. I wouldn’t know how to critique Kurt’s take on it. But he does weave a compelling tale of how, over the centuries, repeated doses of epic individualism and religious fervor have built up the population’s immunity to reality checks. The get-rich-quick gold rushes in Virginia and California. The Salem witch trials. The spectacle (speaking in tongues, channeling the Holy Spirit, seizures and shouting) of America’s early evangelical churches. The wholesale fabrication of new religions, from the Book of Mormon to the Church of Christ, Scientist. Pseudoscientific medical fads, from magnetic healing to alchemy to electric shock therapy to Indian Cough Cure. The Freemason conspiracy behind the American Civil War. P.T. Barnum, Buffalo Bill, Harry Houdini. By the year 1900, reality’s power to impose itself upon the American self or world had been significantly eroded. Or, as Kurt put it:

If some imaginary proposition is exciting, and nobody can prove it’s untrue, then it’s my right as an American to believe it’s true. (p.107)

America’s 20th century was a story of military, economic, social and scientific achievement, but it also saw the discovery of new solvents that dissolved the boundary between reality and fantasy more and more. Orson Welles’s science-fiction radio broadcast, War of the Worlds, caused a real-life panic in New York City. Modern advertising was born, to invent new desires and ways to satisfy them. Napoleon Hill published Think and Grow Rich. In the 1950s, Disneyland opened its doors (so that adults could relive their childhood) and modern Las Vegas was born (so that adults could escape the responsibilities they had accumulated). Around the same time, the first James Bond novel and the first Playboy magazine were published. The L.A. science fiction author, L. Ron Hubbard, started up the Church of Scientology. McCarthyism accused everyone from leftist filmmakers to President Harry Truman of conspiring with Moscow. The Kennedy assassination, the truth of which is still out there!

Humility and reasonableness; discipline and hard work. These virtues continued to take a beating. In the 1950s, Reverend Norman Vincent Peale (who mentored, among others, Donald Trump) published The Power of Positive Thinking, which Kurt describes as “breezy self-help motivational cheerleading mixed with supernatural encouragement.” It stayed on the New York Times bestseller list for 150 weeks.

Woodstock. Hippies. Beatniks. The New Age thinking of the 1970s: ‘What we see in our lives is the physical picture of what we have been thinking, feeling and believing.’ In other words, we create our own reality.

‘You are you, and I am I. If by chance we find each other, it’s beautiful. If not, it can’t be helped.’ (Fritz Perls of the Esalen Institute)

ESP. Parapsychology. Science fiction fan conventions and Burning Man festivals. Pro wrestling. Reality television.

The point is this: Yes, new technologies—the internet, mobile, social media—have eliminated the gate-keepers who used to limit the privilege to speak publicly. But how eagerly people listen, and whether people believe what they hear, is a function of audience culture.


Reality Will Triumph In The End…Won’t It?

What stands between belief in the American Dream and belief in a Fantasyland—once the virtues that acted as a barrier between the two have been dissolved?

One answer, of course, is reality. In Age of Discovery, I (rather airily) advised:

‘We are all going to ignore many truths in our lifetime. Try not to. Ultimately, reality is hard to reject. Healthy, successful people—and societies—are those that build upon it.’

Now that I think about it again, that argument is too simplistic. Much of human civilization up to now has passed in self-deluded ignorance. Our cave-dwelling ancestors believed that by drawing a picture of their buffalo hunt, they caused the kill to happen. By the standards of modern science, that’s ridiculous. But it worked well enough (perhaps as a motivational tool?) to get our species to the here and now. History shows that humanity can be irrational, can be unreasonable, can be deluded into magical thinking, and still survive. Even thrive.

Reality doesn’t always assert itself in human timescales. We can ignore reality, and get away with it. For a long, long time.

But given the sheer scale of humanity today, we’ve now reached a reckoning point. Mass delusion is a luxury we can no longer afford. Unfortunately, it’s also a hard habit for us to break.


No Easy Answers. But Clear Questions

Post-truth. Alternative facts. We’ve introduced these new phrases into our language to talk about our suddenly new condition. Only it’s not suddenly new. It’s just suddenly obvious.

How do we navigate society out of this fantasyland and back to a shared reality? The answer is not simply “social media regulation” or “more technology.” These knee-jerk prescriptions might play a part, but they lack the leverage to stop 500 years of momentum in the other direction.

The answer is not simply to “get involved in politics,” either. Politics is a game of public speech that we play within an audience culture. Victory, as we’ve seen, often goes to those who understand what their audience likes to hear, not to those who ask their audience to hear differently.

Nor is the answer to teach our children critical thinking skills in school (although, again, that might play a part). Lies dressed up as facts in the media is just one symptom, one corner, of the wider culture they’re growing up in—a culture in which Kylie Jenner (of the Kardashian clan) is the world’s youngest-ever billionaire and Fifty Shades of Grey is the most-read novel in America.

Kurt, to his credit, offers no easy solutions at the end of his Fantasyland:

What is to be done? I don’t have an actionable agenda. I don’t have Seven Ways Sensible People Can Save America from the Craziness. 

But if his map of the territory is correct, then the only genuine answer is to shift culture. To rebuild the virtues of discipline and hard work, of humility and reasonableness, as a community-owned dam between reality and fantasy. To retreat from self-deluded ignorance back to the healthier zone of self-made free-thinkers.

That sounds nice—and kinda impossible. “Culture” is a fuzzy, emergent property of a population. It’s real, but we can’t touch it.

We can, however, see it—at least, we’re starting to. And, like every other aspect of our social reality (e.g., gender bias, media bias), the clearer we see it, the more it falls under our conscious influence. That’s how cultural revolutions—good and bad, #metoo and #fakenews—happen.

And so, even if there are no obvious exits from fantasyland, the obvious next step is to draw for ourselves clear maps of our current cultural landscape, so that we can hold them up to conscious scrutiny:

1. Who are our heroes? Who are our villains? What makes them so?

2. Who are our “priests,” our “thought leaders,” our “social influencers”? What gives them their authority among us?

3. What are our myths—the shared stories that we celebrate and share to exemplify how to be and behave? What virtues do those myths inspire in us? What vices do those virtues entail, if left unchecked?

The more bravely we explore these questions, the better pathfinders we’ll be.

Map #33: The Power Of Doubt


Last week, I helped open OXSCIE 2018, a global summit on the future of education at Oxford University. It was a special event, focussed mainly on challenges in the developing world. Delegates were a balance of education policy makers, academic researchers, donors and teachers, and each of those four groups was represented by the world’s best and most influential. Two personal highlights for me were meeting Andrea Zafirakou, the high school art teacher who won the $1 million ’Best Teacher in the World’ prize for 2018; and Lewis Mizen, a 17-year-old survivor of the Marjory Stoneman Douglas High School shooting in Florida this February (he’s now a political activist).

The official theme of the summit was “Uncertainty, Society and Education.” That must rank among the broadest conference themes in the history of conferences. 

(As an aside, the list of “broad conference themes” is a crowded field. Did you know?: The single most common conference theme in the world is “Past, Present and Future”? e.g., Astrochemistry: Past, Present and Future; Libraries: Past, Present and Future; even Making Milk: Past, Present and Future…which, as an even further aside, is a much wider and deeper topic than I first imagined! Here’s that conference’s output.) 

Brave voyages indeed,

Chris

 

Too Much V.U.C.A.

Back to Oxford. The organizers (Aga Khan Foundation, Global Centre for Pluralism and Oxford) asked me to set the stage for their two-day event by talking for 10 minutes around the question, “What is uncertainty?” 

(Admittedly I went slightly over time. Well, way over time.)

Because I love this question. I love it, because it challenges us all to stop and think about a word—uncertainty—that gets thrown around more and more every day.

At virtually every leadership-ish event I go to these days, people talk about the V.U.C.A. world. Volatile, Uncertain, Complex, Ambiguous. The acronym originated at the US Army War College in the early 1990s (see the original, much-photocopied essay), where, in the aftermath of the Cold War, military officers and students struggled to make sense of the new strategic environment. It has since entered popular language in executive forums because it seems to capture the mood that many people—especially people burdened with decision-making responsibilities—feel.

In my slideshow, I threw up a couple random stock illustrations of the V.U.C.A. acronym that I had pulled off the Internet. (Another aside: The availability of stock art to illustrate how to make sense of the world is, I think, a big red flag that we need to examine these sense-makings more closely before we adopt them ourselves!) 

If you run this Google Image search yourself, you’ll come across some stock definitions that are laughably bad. e.g.,

Uncertainty: When the environment requires you to take action without certainty.

Others hit closer to the mark:

Uncertainty speaks to our very human inability to predict future outcomes. In uncertain environments, people can be hesitant to act because of their inability to be sure of the outcome, leading to inaction and missed opportunities.

This definition, I think, captures the concepts that decision-makers have in mind when they talk about uncertainty today: unpredictability, hesitation, inaction, missed opportunities. The broad notion is that uncertainty is a bad thing, a negative, because ideally what we’d possess whenever we make decisions is certainty:

Is that the right action to take? Yeah, I’m sure. Are you certain? I’m certain. Then what are you waiting for? Go and do it.

 

Do you love that person? Yeah. Are you certain? I’m certain. Then marry them.

Certainty is our preferred foundation for the biggest, most important decisions that we make in life.

If that’s right—and if uncertainty is the absence of that kind of decision-making ability—then we’re in trouble. We’re going to be hesitant and miss out on a lot of opportunities in our lifetime—because the present world is full of uncertainty. 

So Much We Don’t Know

Consider a few of the biggest and most obvious causes of uncertainty about humanity’s present condition:

Urban Unknowns

Take urbanization. The world boasted two mega-cities of more than 10 million people in 1950—New York and Tokyo. Today there are 40, two-thirds of which are in the developing world. Humanity’s urban population has quadrupled in the past 75 years, and that quadrupling has put everyone in closer proximity to everyone else. Now, 95% of humanity occupies just 10% of the land. Half of humanity lives within one hour of a major city—which is great for business, and also great for the spread of disease. Despite all our medical advances, the 2017/18 winter was the worst flu season in a decade in North America and Europe. Why? Because it was the worst flu season in a decade in the Australian winter, six months prior. Thanks to the global boom in livestock trade and tourism, we now live in a world where, whenever Australia sneezes, the rest of us might get sick. And vice versa.

Environmental Iffiness 

Or take environmental change. In 1900, there were 1.6 billion humans on the planet, and we could ignore questions of humankind’s relationship with the biosphere. Now we are 7.4 billion humans, and it seems we can no longer do so—an inconvenient truth which South Africa re-learned this spring when Cape Town’s water reservoirs nearly ran dry, and which the UK re-learns every year during its now-annual spring flash floods.

Maybe renewable energy technologies will solve our biggest climate problems. In just the past twenty years, renewable power generation has grown nine-fold, from 50 MToE (million tonnes of oil equivalent) in 1998 to 450 MToE today. 

Or maybe not. Looking twenty years into the future: on current trends, by 2040 renewables will still only make up 17% of the global energy supply, up from 12% today. The world keeps growing, and growth is energy intensive. 

Demographic Doubts

Take demographics. At the beginning of life: Fertility is falling worldwide. Birth rates are already at, or below, replacement levels pretty much everywhere except in Africa. At the end of life: Life expectancy is rising worldwide. In 1950, the average human died at age 50; today, at age 70. (That latter statistic is maybe the most powerful, least debatable evidence of “progress” that human civilization might ever produce.)

The combination of both those trends means that humanity is aging rapidly, too. In 1975, the average Earthling was 20 years old; today, the average Earthling is 30 years old. That extra decade has mammoth consequences for every aspect of society, because virtually every human want and need varies with age. 

One of the easiest places to see this impact is in the public finances of rich countries. In the US, to deliver current levels of public services (everything from education to health care to pensions) to the projected population in 2030, taxpayers will need to find an additional US$940 billion. In the UK, they’ll need to find another US$170 billion, and in Canada they’ll need to find another US$90 billion. Why? Fewer taxpayers, more recipients. How will we fill this funding gap?

Economic and Political Insecurities

Meanwhile, a tectonic shift in global economics is underway. By 2050, China’s economy will be twice the weight of the US’s in the world. India’s will be larger, too. What changes will that bring? 

Geopolitics is going through a redesign. The post-WWII liberal, rules-based international order is anchored in a handful of multilateral institutions: the UN, the WTO, the IMF, the World Bank, the EU, the G7. In the very countries that built this order, more and more people are asking, “Is this the order that can best deliver me peace and prosperity in the future?” Those who loudly shout “No!” are winning elections and shifting the tone and content of democratic discourse. In the US and UK, trust in government has fallen to about 40%. Q: Which country’s public institutions boast the highest trust rating in 2018? A: China (74%). All of which raises the question: Whose ideas will govern the future?

Analysis Paralysis…?

In short, humanity is sailing headlong into trends that will transform us all in radical, unpredictable ways—and that’s before we even begin to consider the “exponential technologies” that occupied my last two letters.

We do live in a moment of big unknowns about the near future—bigger, maybe, than at any other time in human history. This could be the proverbial worst of times, or best of times—and both seem possible. 

So if “uncertainty” is what we generally take it to be (i.e., an unwanted ignorance that makes us hesitate), then our world must be full of indecision and inaction and missed opportunities right now.

I’m More Worried About Certainty 

Here’s where I call bullshit on this whole conception of uncertainty. It just doesn’t feel right.

And feeling is the missing dimension. To define “uncertainty” as our degree of not-knowing about the world around us is, I think, only half-right. Half of uncertainty is “What do we know?”, out there. But the other half of uncertainty is “How do we feel about that?”, in here.

Once we understand that uncertainty exists in this two-dimensional space, I think we better appreciate its positive role in our thinking and acting. And I think we discover that the danger zone isn’t uncertainty. It’s certainty.

Four Sins Of Certainty 

Take the fearless know-it-all. “I understand my domain better than anyone else. That’s how I’ve gotten to where I am today, and that’s why I can make bold moves that no one else is willing to.” We call that myopia.

What about the anxious expert? They’ve done everything possible to prepare themselves for the job in front of them. That’s why, if they fail, they know the depth of their failure better than anyone. That’s what we call angst.

How is it possible to be certain AND not know? By being certain that you don’t know—and that you need to know. You need to act, and you know that the consequences of acting wrong can break you. You see all the variables, but no possible solution. So you’re stuck. We call that paralysis.

To illustrate each of these three ‘sins’, I mapped them to the recent behaviors of three prominent political leaders. I tried really hard to think of a world leader who would fit into the fourth quadrant…but so far I’ve drawn a blank. Any ideas? It would have to be somebody who (a) doesn’t know anything, but (b) doesn’t care because knowledge is for losers. What matters is willpower. 

We call that hubris. 

Four Virtues Of Uncertainty 

The healthy role for uncertainty, it seems, is to protect us against these extremes:

Uncertainty can challenge myopia, by confronting the know-it-all with a diversity of views. 

Uncertainty can validate the angst-ridden agent. Through peer mentoring and support, we are reminded that, even if our mistakes have been great, and final, those losses do not say everything about us. Validating voices force us to recognize the positives that we’ve omitted from our self-knowledge.

Uncertainty can replace our paralysis with a focus on learning. True, some critical factors may be unknowable, and some past actions may be unchangeable. But not all. A good first step is to identify the things we can affect—and understand them better.

Uncertainty can prepare us to weather the consequences of hubris. Maybe you are always right. Maybe you do have magic eyeglasses to see reality more clearly than everyone else. But just in case, let’s do some risk management and figure out what we’ll do if it turns out that you aren’t, or don’t.

 

The Power Of Doubt 

A few letters ago, I mused about the danger of too much truth in democratic societies:

When too many people believe they have found truth, democracy breaks down. Once truth has been found, the common project of discovery is complete. There is no more sense in sharing power with those who don’t realize it…To rescue the possibility of groping toward Paradise democratically, we need to inject our own group discourses with doubt. 

Now I’m beginning to understand that the power of doubt extends far beyond the political realm. It’s an important ingredient in all domains of leadership—from figuring out the future of education, to making decisions in our professional lives, to maintaining our mental health. (A couple of my friends at Oxford Saïd Business School, Tim Morris and Michael Smets, did a major study among CEOs on this topic.) 

Can we get comfortable with being uncomfortable? That, I think, is one of the critical skills we all need to develop right now. 

Map #32: Homo Hubris? (Part 2 of 2)


Technology drives change. Does it also drive progress? 

Those eight words sum up a lot of the conversation going on in society at the moment. Some serious head-scratching about the whole relationship between “technology” and “progress” seems like a good idea.

In Part 1, I summarized “four naïveties” that commonly slip into techno-optimistic views of the future. Such views gloss over: (1) how technology is erasing the low-skilled jobs that, in the past, have helped poor countries to develop (e.g. China); (2) how, in a global war for talent, poorer communities struggle to hold onto the tech skills they need; (3) how not just technology, but politics, decides whether technological change makes people better off; and (4) how every technology is not just a solution, but also a new set of problems that society must manage well in order to realize net gains.

Technology = Progress?

The deepest naïveté—the belief lurking in the background of all the above—is that technological change is a good thing.

This is one of the Biggest Ideas of our time—and also one of the least questioned…

It wasn’t always so obviously true. In 1945, J. Robert Oppenheimer, upon witnessing a nuclear explosion at the Manhattan Project’s New Mexico test site, marked the moment with a dystopian quote from the Bhagavad Gita: “I am become death, destroyer of worlds.” 

But within ten years, and despite the horrors of Hiroshima and Nagasaki, a far more utopian spin on the Atomic Age had emerged. Lewis Strauss, architect of the U.S. “Atoms for Peace” Program and one of the founding members of the Atomic Energy Commission, proclaimed in 1954:

It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, and will know of great periodic famines in the world only as matters of history. They will travel effortlessly over the seas and under them, and through the air with a minimum of danger and at great speeds. They will experience a life span far longer than ours as disease yields its secrets and man comes to understand what causes him to age. 

What happened in the years between those two statements to flip the script from techno-dystopia to techno-utopia?

Wartime state-sponsored innovation yielded not only the atomic bomb, but: better pesticides and antibiotics; advances in aviation and the invention of radar; plastics and synthetic fibers; fertilizers and new plant varieties; and of course, nuclear energy.

Out of these achievements, a powerful idea took hold, in countries around the world: science and technology meant progress. 

In the U.S., that idea became official government dogma almost immediately after the war. In a famous report, Science: The Endless Frontier, Vannevar Bush (chief presidential science advisor during WWII, leader of the country’s wartime R&D effort and founder of the U.S. arms manufacturer Raytheon) made the case to the White House that (a) the same public funding of science that had helped win the war would, if sustained during peace-time, lift society to dizzying new heights of health, prosperity and employment. The report also warned that (b) “without scientific progress, no amount of achievement in other directions can insure our health, prosperity and security as a nation in the modern world.” But Vannevar also framed the public funding of scientific and technological research as a moral imperative:

It has been basic United States policy that Government should foster the opening of new frontiers. It opened the seas to clipper ships and furnished land for pioneers. Although these frontiers have more or less disappeared, the frontier of science remains. It is in keeping with the American tradition—one which has made the United States great—that new frontiers shall be made accessible for development by all American citizens.

Moreover, since health, well-being and security are proper concerns of Government, scientific progress is, and must be, of vital interest to Government. Without scientific progress the national health would deteriorate; without scientific progress we could not hope for improvement in our standard of living or for an increased number of jobs for our citizens; and without scientific progress we could not have maintained our liberties against tyranny.

In short, science and technology = progress (and if you don’t think that, there’s something unpatriotic—and morally wrong—about your thinking).

The High Priests of Science & Technology Have Made Believers Of Us All 

In every decade since, many of the most celebrated, most influential voices in popular culture have been those who repeated and renewed this basic article of faith—in the language of the latest scientific discovery or technological marvel. E.g., 

1960s: John F. Kennedy’s moonshot for space exploration; Gordon Moore’s Law of exponential growth in computing power; the 1964-65 New York World’s Fair (which featured future-oriented exhibits like Bell Telephone’s PicturePhone and General Motors’ Futurama)

1970s: Alvin Toffler’s Future Shock, which argued that technology was now the primary driver of history; Carl Sagan, who argued that scientific discovery (specifically, in astronomy) reveals to us the most important truths of the human condition; Buckminster Fuller, who argued that breakthroughs in chemistry, engineering and manufacturing would ensure humanity’s survival on “Spaceship Earth” 

We can make all of humanity successful through science’s world-engulfing industrial evolution. 

– Buckminster Fuller, Operating Manual for Spaceship Earth (1968)

1980s: Steve Jobs, who popularized the personal computer (the Mac) as a tool for self-empowerment, self-expression and self-liberation (hence, Apple’s iconic ”1984” TV advertisement); Eric Drexler, the MIT engineer whose 1986 book Engines of Creation: The Coming Era of Nanotechnology imagined a future free from want because we’ll be able to assemble anything and everything we need, atom-by-atom; Hans Moravec, an early AI researcher whose 1988 book, Mind Children, applied Moore’s Law to the emerging fields of robotics and neuroscience and predicted that humanity would possess godlike powers of Creation-with-a-capital-C by 2040. Our robots would take our place as Earth’s most intelligent species.

1990s: Bill Gates, whose vision of “a computer on every desktop” equated improved access to Microsoft software with improvements in human well-being; Ray Kurzweil, another AI pioneer, who argued in Age of Intelligent Machines (1990), Age of Spiritual Machines (1999) and The Singularity is Near (2005) that the essence of what makes us human is to reach beyond our limits. It is therefore inevitable that science and technology will eventually accomplish the next step in human evolution: the transhuman. By merging the “wetware” of human consciousness with computer hardware and software, we will transcend the biological limits of brainpower and lifespan. 

2000s: Sergey Brin and Larry Page, who convinced us that by organizing the world’s information, Google could help humanity break through the barrier of ignorance that stands between us and the benefits that knowledge can bring; Steve Jobs (again), who popularized the smartphone as a tool of self-empowerment, self-expression and self-liberation (again), by making it possible for everyone to digitize everything we see, say, hear and touch when we’re not at our desks.

2010s: Mark Zuckerberg, who, in his Facebook manifesto, positions his company’s social networking technology as necessary for human progress to continue: 

Our greatest opportunities are now global—like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses—like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community…Facebook develops the social infrastructure to give people the power to build a global community that works for all of us. 

(Facebook, apparently, is the technology that will redeem us all from our moral failure to widen our ‘circle of compassion’ [as Albert Einstein called it] toward one another.)

Elon Musk likewise frames his SpaceX ‘Mars-shot’ as necessary. How else will humanity ever escape the limits of Spaceship Earth? (Nearly seventy-five years after Vannevar’s Endless Frontier report, we now take for granted that “escaping” such “limits” is the proper goal of science—and by extension, of society.)

And last (for now, at least), Yuval Harari, whose latest book, Homo Deus: A Brief History of Tomorrow, says it all in the title.

Science and technology is the engine of human progress. That idea has become so obviously true to modern minds that we no longer recognize it for what it really is: modernity’s single most debatable premise. 

Rather than debate this premise—a debate which itself offers dizzying possibilities of progress, in multiple dimensions, by multiple actors—we quite often take it as gospel. 

Rather than debate this premise, Yuval instead takes it to its ultimate conclusion, and speaks loudly the question that the whole line of High Priests before him quietly whispered: Do our powers of science and technology make us gods? 

It is the same question that Oppenheimer voiced in 1945, only now it’s been purified of all fear and doubt.

We Can Make Heaven On Earth

“Utopia,” which Thomas More coined in his book by the same name in 1516, literally means “no place.” In the centuries since, many prophets of this or that persuasion have painted utopian visions. But what makes present-day visions of techno-utopia different is the path for how we get there. 

In the past, the path to Utopia called for an impossible leap in human moral behavior. Suddenly, we’ll all follow the Golden Rule, and do unto others as we would have done unto us. Yeah, right. 

But today’s path to techno-Utopia calls for a leap in science and technology—in cybernetics, in artificial intelligence, in biotechnology, in genetic manipulation, in molecular manufacturing. And that does seem possible…doesn’t it? Put it this way: Given how far our technology has come since the cracking of the atom, who among us is willing to say that these breakthroughs are impossible?

And if they are not impossible, then Utopia is attainable. Don’t we then have a duty—a moral duty—to strive for it?

This argument is so persuasive today because we have been persuading ourselves of it for so long. Persuasive—and pervasive. It is the basic moral case being made by a swelling number of tech-driven save-the-world projects, the starkest example of which is Singularity University. 

I find it so compelling that I don’t quite know what to write in rebuttal…

Gods—Or Slaves?

Until I recall some of the wisdom of Hannah Arendt, or Zygmunt Bauman, or remember my earlier conversation with Ian, and remind myself that technology never yields progress by itself. Technology cannot fix our moral and social failings, because those same failings are embedded within our technologies. They spread with our technologies. Our newest technology, A.I. (which learns our past behaviors in order to repeat them), is also the plainest proof of this basic truth. More technology will never be the silver-bullet solution to the problems that technology has helped create.

And so we urgently need to delve into this deepest naïveté of our modern mindset, this belief that technological change is a good thing.

How might we corrupt our techno-innocence? 

One thing that should leap out from my brief history of the techno-optimistic narrative is that most of the narrators have been men. I don’t have a good enough grasp of gender issues to do more than point out this fact, but that right there should prompt some deep conversations. Question: Which values are embedded in, and which values are excluded from, tech-driven visions of human progress? (E.g., Is artificial enhancement an expression of humanity’s natural striving-against-limits, or a negation of human nature?)

As a political scientist, I can’t help but ask the question: Whose interests are served and whose are dismissed when technology is given pride of place as the primary engine of our common future? Obviously, tech entrepreneurs and investors do well: Blessed are the tech innovators, for they are the agents of human progress. At the same time: Accursed are the regulators, for they know not what they govern. 

Yuval slips into this kind of thinking in his Homo Deus, when he writes:

Precisely because technology is now moving so fast, and parliaments and dictators alike are overwhelmed by data they cannot process quickly enough, present-day politicians are thinking on a far smaller scale than their predecessors a century ago. Consequently, in the early twenty-first century politics is bereft of grand visions. Government has become mere administration. It manages the country, but it no longer leads it.

But is it really the speed of technological change, or the scale of data, that limits the vision of present-day politicians? Or is it the popular faith that any political vision must accommodate the priorities of technological innovators? For all its emerging threats to our democracy, social media must be enabled. For all its potential dangers, research into artificial intelligence must charge ahead. Wait, but—why?

Why!?! What an ignorant question!

And while we’re on the topic of whose interests are being served/smothered, we should ask: whose science and technology is being advanced, and whose is being dismissed? “Science and technology” is not an autonomous force. It does not have its own momentum, or direction. We determine those things. 

The original social contract between science and society proposed by Vannevar Bush in 1945 saw universities and labs doing pure research for its own sake, guided by human curiosity and creativity. The private sector, guided by the profit motive, would then sift through that rich endeavor to find good ideas ready to be turned into useful tools for the rest of us. But the reality today is an ever closer cooperation between academia and business. Private profit is crowding out public curiosity. Research that promises big payoffs within today’s economic system usually takes precedence over research that might usher in tomorrow’s…

Homo Humilitas 

All predictions about the future reflect the values and norms of the present. 

So when Yuval drops a rhetorical question like, Will our powers of science and technology one day make us gods?, it’s time to ask ourselves tough questions about the value we place on technology today, and what other values we are willing to sacrifice on its altar. 

The irony is that, just by asking ourselves his question—by elevating science and technology above other engines of progress, above other values—we diminish what humanity is and narrow humanity’s future to a subset of what might be.

It is as if we’ve traded in the really big questions that define and drive progress—“What is human life?” and “What should human life be?”—for the bystander-ish “What does technology have in store for our future?”

That’s why I suspect that the more we debate the relationship between technology and progress, the more actual progress we will end up making. 

I think we will remind ourselves of the other big engines of progress at society’s disposal, like “law” and “culture” and “religion,” which are no less but no more value-laden than “technology.”

I think we will remind ourselves of other values, some of which might easily take steps backward as technology “progresses”. E.g., As our powers to enhance the human body with technology grow stronger, will our fragile, but fundamental, belief in the intrinsic dignity of every human person weaken? 

I think we will become less timid and more confident about our capacity to navigate the now. Within the techno-utopian narrative, we may feel silenced by our own ignorance. Outside of that narrative, we may feel emboldened by our wisdom, our experience, our settled notions of right and wrong. 

I think we will recall, and draw strength from, examples of when society shaped technology, and not the other way around. In the last century, no technology enjoyed more hype than atomic energy. And yet just look at the diversity of ways in which different cultures incorporated it. In the US, where the nuclear conversation revolves around liability, construction of new nuclear plants all but stopped after the Three Mile Island accident of 1979. In Germany, where the conversation revolves around citizens’ rights to participate in public risk-taking, the decision was taken in 2011 to close down all 17 reactors in the country—in direct response to the Fukushima meltdown in Japan. Meanwhile in South Korea, whose capital Seoul is only 700 miles from Fukushima, popular support for the country’s 23 reactors remained strong. (For South Koreans, nuclear technology has been a symbol of the nation’s independence.)

And I think we will develop more confidence to push back against monolithic techno-visions of “the good.” Wasn’t the whole idea of modernity supposed to be, as Nietzsche put it, “God is dead”—and therefore we are free to pursue a radical variety of “goods”? A variety that respects and reflects cultural differences, gender differences, ideological differences… Having done the hard work to kill one idea of perfection, why would we now all fall in line behind another? 

Four Little Questions To Reclaim The Future 

None of the above is to deny that technology is a profound part of our lives. It has been, since the first stone chisel. But we hold the stone in our hands. It does not hold us.

Or does it? After decades of techno-evangelism, we risk slipping into the belief that if we can do it, we should do it. 

Recent headlines (of cybercrime, social media manipulation, hacked public infrastructure and driverless car accidents) are shaking that naïveté. We understand, more and more, that we need to re-separate, and re-arrange, these two questions, in order to create some space for ethics and politics to return. What should we do? Here, morality and society must be heard. What can we do? Here, science and technology should answer.

Preferably in that order. 

It’s hard to imagine that we’ll get there. But I think: the more we debate the relationship between technology and progress, the more easily we will find our rightful voice to demand of any techno-shaman who intends to alter society:

  1. What is your purpose? 
  2. Who will be hurt? 
  3. Who will benefit? 
  4. How will we know?

By asking these four simple questions, consistently and persistently, we can re-inject humility into our technological strivings. We can widen participation in setting technology’s direction. And we can recreate genuinely shared visions of the future. 

Map #32: Homo Hubris? (Part 1 of 2)


On the heels of my recent foray into A.I., I’ve been reading a bunch of recent books on our coming technological utopia: Yuval Harari’s Homo Deus, Peter Diamandis’s Abundance, Steven Pinker’s Enlightenment Now and Ray Kurzweil’s The Singularity Is Near. They’re quick reads, because they all say basically the same thing. Thanks to emerging ‘exponential technologies’ (i.e., Artificial Intelligence, Internet of Things, 3D printing, robotics and drones, augmented reality, synthetic biology and genomics, quantum computing, and new materials like graphene), we can and will build a new and better world, free from the physical, mental and environmental limits that today constrain human progress. 

It all seems so clear. So possible. 

To muddy the waters, I sat down for coffee with my pal and frequent co-author Ian Goldin, who throughout his whole career—from advising Nelson Mandela in his home country South Africa, to helping shape the global aid agenda as a senior exec at the World Bank—has labored to solve poverty. (His latest book, Development: A Very Short Introduction, is, as the title suggests, an excellent starting point on the subject.)

I wanted to explore the ‘hard case’ with Ian. And the hard case is poverty, in its many manifestations. Whether these ‘exponential technologies’ relieve me of the burden of leaving my London flat to buy shaving blades is one thing; whether they can help relieve the burden of poor sanitation in Brazilian favelas is another thing entirely. 

Question: will the sexy new technologies that scientists and techies are now hyping really give us the weapons we need to solve the hard problems that plague the world’s poor? 

Answer: Not unless we first address ‘four naïvetés’ in our thinking about the relationship between ‘technology’ and ‘development.’ 

Vanishing Jobs: No Ladder To Climb

The first naïveté concerns jobs. Automation is making the existing model for how poor people get richer obsolete. How did China grow its economy from insignificant in the 1970s, to the world’s largest today? That ascent began with low-cost labor. Foreign manufacturers moved their factories to China. The money they saved by paying lower wages (in the early 1990s, the average Chinese factory wage was US$250 per year) more than offset the increased cost of shipping their products all the way from China to their customers.

Today, average factory wages in China are quite a lot higher (the latest stat I’ve seen is US$10,000 per year). The countries that can boast loudest about their low-cost labor supply in 2018 are places like Burundi, Malawi and Mozambique. Unfortunately for them, fewer and fewer foreign manufacturers see low-cost labor as a winning strategy. Nowadays, in industries ranging from smartphones to automobiles, increasingly capable and affordable factory robots can crank out more, better, customized products than an assembly line staffed by humans ever could. In the rapidly arriving world of robot factories, it is not the cost of labor, but rather the cost of capital, that determines a factory’s profitability. And capital—whether in the form of a bank loan, a public offering of stock, or private equity investment—is much cheaper and easier to raise in the mature financial markets of New York than in New Delhi or Côte d’Ivoire. How will Africa ever repeat China’s economic climb, if the first and lowest rung on the development ladder—i.e., a low-cost labor advantage—has been sawed off by robots? 

Gravity’s Pull (and the Pooling of Scarce Skills)

The second naïveté concerns choice. It’s a safe assumption that births of big-brained people are evenly distributed across the whole of humanity. From Canada to Cameroon, a similar share of the population is born with the raw intellectual horsepower needed to understand and push the boundaries of today’s science and technology. And thanks to the internet, mobile data and digital knowledge platforms, whether in Canada or the Central African country of Cameroon, such big-brained people now have a better chance than at any other time in history to nurture that intelligence. Genius is flourishing. Globally.

But as it matures, genius tends to pool in just a few places. That’s because, while the odds of winning the intelligence lottery at birth might be distributed evenly everywhere, the opportunities to cash in that winning ticket are not. Those opportunities pool. Within countries, they pool in the cities and on the coastlines. Between countries, they pool in the fastest-growing and richest economies. If I am a talented data scientist in Cameroon, am I going to start up a business in my capital city of Yaoundé (maybe) or (more likely) get on a plane to Silicon Valley, where the LinkedIns and Facebooks of the world today dangle US$2 million starter salaries in front of people with skills like mine? (Right now, even top-tier US universities struggle to retain skilled staff when Silicon Valley comes recruiting. How on earth can Cameroon compete?)

If technology does drive progress, and if the skills needed to drive the technology are scarce, then progress will remain uneven—and poorer places will continue to lag behind. 

 

Politics Are Unavoidable—And Decisive 

The third naïveté (or maybe it’s 2(b)) concerns distribution. Every technology has strong distribution effects. It generates winners and losers. Some own it; others pay to use it. Some build it (and accumulate valuable equity); others buy it (and accumulate debt). Some talents are in high demand, and salaries soar; some talents are no longer required, and jobs are lost. 

That’s life. How society chooses to respond to these distribution effects is a political question, one that every community answers for itself (albeit with varying degrees of popular awareness and participation). Public institutions and laws passed by the political system (regarding, say: property rights, taxation, transparency and oversight) shape what happens after the gains and losses are…er…won and lost.

If the big topic we’re interested in is “progress”, then we need to take an interest in these political questions. Technologies never, ever yield progress by themselves. (For the clearest evidence, look no further than the United States. Since 1990, U.S. society has undergone an astonishing technological transformation: the advent of the Internet; the advent of mobile phones (and now, the mass adoption of mobile broadband data); the mapping of the human genome and the advent of genetic medicine; the advent of autonomous vehicles; the advent of commercial space travel; the advent of e-commerce and social media; the advent of 3D printing and nanotechnology and working quantum computers; the advent of turmeric lattes. And yet, for all that, the average salary of people in the bottom 20% of US wage-earners is lower today, in real-dollar terms, than it was 28 years ago. Put another way, the bottom 20% of wage-earners are taking home less pay today than they did back when the overall US economy was only half its current size. If we’re talking about economic progress, it’s pretty clear that there’s been a lot for some, and less than none for others.)

All ‘Technology’ Is A Solution AND A Problem 

The fourth naïveté concerns the social nature of technology. Technology may be a solution, yes, but it is not only that. It is also a package of unintended risks and consequences that need to be managed by society. The most infamous example is the Bhopal disaster in India in 1984. A leak of toxic gas at Union Carbide’s pesticide plant killed thousands and injured hundreds of thousands more. It was the 20th century’s worst industrial accident. 

The intended consequence was to catapult India’s farmers into the future. A Green Revolution was underway! New chemical fertilizers and pesticides were lifting crop yields to never-before-seen heights! By importing the latest agrochemical technology from the U.S., India would join that Revolution and banish the specter of mass starvation from its borders.

Instead, the disaster demonstrated how badly wrong things can go when we transfer the material components of a technology from one society to another, but ignore its social components: risk assessment, regulation, ethics, and public awareness and participation. 

In short: a society is a great big, complex system. Technology is just one input. Other inputs, stocks, flows and feedback loops also determine what the eventual outputs will be. 

Technology = Progress?

The deepest naïveté—the belief lurking in the background of all the above—is that technological change is a good thing.

This is one of the Biggest Ideas of our time—and also one of the least questioned…

To be continued next Sunday…

Map #31: A.I. — Humanity’s Lab Rat Or Robot Overlord?


’I have always believed that any scientific concept can be demonstrated to people with no specialist knowledge or scientific education.’ Richard Feynman, Nobel physicist (1918-1988)

I feel like the whole field of AI research could take a cue from Richard Feynman. Why is something that computer scientists, businesses, academics and politicians all hype as “more important to humanity than the invention of fire” also so poorly explained to everyone? How can broader society shape the implications of this new Prometheus if no one bothers to frame the stakes in non-specialist terms?

This is a project that I’m working on, and I’d love to hear your thoughts on my initial attempts. For starters, in this letter I want to map out, in non-specialist language, where AI research has been and where it is today. 

The Temple of Geekdom

I had coffee recently with the head of AI for Samsung in Canada, Darin Graham. (He’s heading up one of the five new AI hubs that Samsung is opening globally; the others are in Silicon Valley, Moscow, Seoul and London.)

Hanging out with a bona fide guru of computer science clarified a lot of things for me (see this letter). But Darin demanded a steep price for his wisdom: he made me promise to learn a programming language this year, so that the next time we talk I can better fake an understanding of the stuff he’s working on. 

Given my chronic inability to say “No” to people, I now find myself chipping away at a certification in a language called Python (via DataCamp.com, which another guru-friend recommended and which is excellent). Python is very popular right now among data scientists and AI developers. The basics are quick and easy to learn. But what really sold me on it was the fact that its inventor named it after Monty Python’s Flying Circus. (The alternatives—C, Java, Fortran—all sound like they were invented by people who took themselves far too seriously.) 
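(If you’re curious what “quick and easy” looks like, here is a tiny, purely illustrative snippet of my own, not anything from the course, showing the kind of Python that reads almost like plain English:)

```python
# A toy snippet in the spirit of the language's namesake.
# Variables, a list, a function and a loop: Python's basics read almost like prose.

def sketches_worth_rewatching(sketches):
    """Return every sketch on the list (because they all are)."""
    return [title for title in sketches if title]  # a 'list comprehension', a Python idiom

monty_python = ["Dead Parrot", "Spanish Inquisition", "Ministry of Silly Walks"]

for title in sketches_worth_rewatching(monty_python):
    print(f"Nobody expects... {title}!")
```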

To keep me company on my pilgrimage to the temple of geekdom, I also picked up a couple of the latest-and-greatest textbooks on AI programming: Fundamentals of Deep Learning (2017) and Deep Learning (2018). Again, they clarified a lot of things.

(Reading textbooks, by the way, is one of the big secrets to rapid learning. Popular books are written in order to sell copies. But textbooks are written to teach practitioners. So if your objective is to learn a new topic, always start with a good textbook or two. Reading the introduction gives you a better grounding in the topic than reading a hundred news articles, and the remaining chapters take you on a logical tour through (a) what people who actually work in the field think they are doing and (b) how they do it.) 

Stage One: Telling Computers How (A.I. As A Cookbook)

Traditional computer programming involves telling the computer how to do something. First do this. Now do that. Humans give explicit instructions; the computer executes them. We write the recipe; the computer cooks it. 

Some recipes we give to the computer are conditional: If THIS happens, then do THAT. Back in the 1980s and early 1990s, some sectors of the economy witnessed an AI hype-cycle very similar to the one we’re going through today. Computer scientists suggested that if we added enough if…then decision rules into the recipe, computers would be better than mere cooks; they’d be chefs. Or, in marketing lingo: “expert systems.” After all, how do experts do what they do? The answer (it was thought) was simply: (a) take in information, then (b) apply decision rules in order to (c) reach a decision.
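To make the cookbook idea concrete, here is a toy sketch in Python (the language Darin has me learning). The domain, the rules and the thresholds are all invented for illustration; no real expert system was ever this simple:

```python
# A toy "expert system" in the cookbook style: take in information,
# apply hand-written if...then decision rules, reach a decision.
# The rules and thresholds below are invented purely for illustration.

def loan_decision(credit_score: int, income: float, existing_debt: float) -> str:
    if credit_score < 580:
        return "decline"                     # rule 1: poor credit history
    if existing_debt > 0.4 * income:
        return "decline"                     # rule 2: already carrying too much debt
    if credit_score > 720 and existing_debt < 0.2 * income:
        return "approve"                     # rule 3: the easy cases
    return "refer to a human underwriter"    # the recipe has run out of rules

print(loan_decision(credit_score=700, income=60000, existing_debt=10000))
# -> refer to a human underwriter
```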

It seemed a good job for computers to take over. Computers can ingest a lot more information, a lot faster, than any human can. If we can tell them all the necessary decision rules (if THIS…, then THAT), they’ll be able to make better decisions, faster, than any human expert. Plus, human expertise is scarce. It takes a long time to reproduce—years, sometimes decades, of formal study and practical experience. But machines can be mass manufactured, and their software (i.e., the cookbooks) can be copied in seconds.

Imagine the possibilities! The military and big business did just that, and they invested heavily in building these expert systems. How? By talking to experts and watching experts in order to codify the if THIS…, then THAT recipes they followed. A new age of abundant expertise lay just around the corner.

Or not. Most attempts at the cookbook approach to computerizing expertise failed to live up to the hype. The most valuable output from the “expert system” craze was a fuller understanding of, and appreciation for, how experts make decisions.

First, it turned out that expert decision rules are very hard to write down. The first half of any decision rule (if THIS…) assumes that we’ve seen THIS same situation before. But experts are rarely lucky enough to see the same situation twice. Similar situations? Often. But ‘the same’? Rarely. The value of their expertise lies in judging whether the differences they perceive are relevant—and if so, how to modify their decision (i.e., …then THAT) to the novelties of the now. 

Second, it turned out that there’s rarely a one-to-one relationship between THIS situation and THAT decision. In most domains of expertise, in most situations, there’s no single “right” answer. There are, instead, many “good” answers. (Give two Michelin-starred chefs the same basket of ingredients, and we’d probably enjoy either chef’s dish.) We’ll probably never know which is the “best” answer, since “best” depends, not just on past experience, but on future consequences—and future choices—we can’t yet see. (And, of course, on who’s doing the judging.) That’s the human condition. Computers can’t change it for us.

Human expertise proved too rich, and reality proved too complex, to condense into a cookbook. 

But the whole venture wasn’t a complete failure. “Expert systems” were rebranded as “decision support systems”. They couldn’t replace human experts, but they could be valuable sous-chefs: by calling up similar cases at the click of a button; by generating a menu of reasonable options for an expert to choose from; by logging lessons learned for future reference. 

Stage Two: Training Computers What (From Cooks to Animal Trainers)

Many companies and research labs that had sprung up amidst the “expert system” craze went bust. But the strong survived, and continued their research into the 2000s. Meanwhile, three relentless technological trends transformed the environment in which they worked, year by year: computing power got faster and cheaper; digital connections reached into more places and things; and the production and storage of digital data grew exponentially.

This new environmental condition—abundant data, data storage, and processing power—inspired a new approach to AI research. (It wasn’t actually ‘new’; the concept dated back to at least the 1950s. But the computing technology available then—knobs and dials and vacuum tubes—made the approach impractical.) 

What if, instead of telling the computer exactly how to do something, you could simply train it on what to do, and let it figure out the how by itself? 

It’s the animal trainer’s approach to AI. Classic stimulus-response. (1) Supply an input. (2) Reward the outputs you want; punish the outputs you don’t. (3) Repeat. Eventually, through consistent feedback from its handler, the animal makes its own decision rule—one that it applies whenever it’s presented with similar inputs. The method is simple but powerful. I can train a dog to sit when it hears the sound, “Sit!”; or point when it sees a bird in the grass; or bark when it smells narcotics. I could never tell the dog how to smell narcotics, because I can’t smell them myself. But I don’t need to. All I need to do is give the dog clear signals, so that it infers the link between its behaviors and the rewards/punishments it receives. 

This “Machine Learning” approach has now been used to train systems that can perform all kinds of tricks: to classify an incoming email as “spam” or not; to recognize objects in photographs; to pick out those candidates most likely to succeed in Company X from a tall pile of applicants; or (here’s a robot example) to sort garbage into various piles of glass, plastic and metal. The strength of this approach—training, instead of telling—comes from generalization. Once I’ve trained a dog to detect narcotics, using some well-labelled training examples, it can apply that skill to a wide range of new situations in the real world. 
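Here, roughly, is what the animal-trainer approach looks like in code. This is a minimal sketch using the scikit-learn library, with a four-email “training set” I made up on the spot; a real spam filter would learn from millions of labelled examples:

```python
# A minimal sketch of supervised Machine Learning: the model infers its own
# decision rule from labelled examples. The tiny dataset here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Step 1: supply inputs, each paired with a human-provided label (the "reward signal").
emails = [
    "Win a free prize now",        # spam
    "Cheap meds, click here",      # spam
    "Lunch tomorrow at noon?",     # not spam
    "Here are the meeting notes",  # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Step 2: turn raw text into numerical features the model can learn from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

# Step 3: training -- the model works out the link between features and labels.
model = MultinomialNB()
model.fit(features, labels)

# Step 4: generalization -- apply the learned rule to an email it has never seen.
new_email = vectorizer.transform(["Free prize waiting for you"])
print(model.predict(new_email))  # most likely: ['spam']
```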

One big weakness of this approach—as any animal or machine trainer will tell you—is that training the behavior you want takes a lot of time and effort. Historically, machine trainers have spent months, years, and sometimes decades of their lives manually converting mountains of raw data into millions of clear labels that machines can learn from—labels like “This is a narcotic” and “This is also a narcotic”, or “This is a glass bottle” and “This is a plastic bottle.” 

Computer scientists have a name for this kind of ongoing burden: “technical debt.” 

I like the term. It’s intuitive. You’d like to buy that mansion, but even if you had enough money for the down-payment, the service charges on the mortgage would cripple you. Researchers and companies look at many Machine Learning projects in much the same light. Machine Learning models look pretty. They promise a whole new world of automation. But you have to be either rich or stupid to saddle yourself with the burden of building and maintaining one.

Another big weakness of the approach is that, to train the machine (or the animal), you need to know in advance the behavior that you want to reward. You can train a dog to run into the bushes and bring back “a bird.” But how would you train a dog to run into the bushes and bring back “something interesting”?

From Animal Trainers to Maze Architects

In 2006, Geoff Hinton and his AI research team at the University of Toronto published a seminal paper on something they called “Deep Belief Networks”. It helped spark a new subfield of Machine Learning called Deep Learning. 

If Machine Learning is the computer version of training animals, then Deep Learning is the computer version of sending lab rats through a maze. Getting an animal to display a desired behavior in response to a given stimulus is a big job for the trainer. Getting a rat to run a maze is a lot easier. Granted, designing and building the maze takes a lot of upfront effort. But once that’s done, the lab technician can go home. Just put a piece of cheese at one end and the rat at the other, and the rat trains itself, through trial-and-error, to find a way through. 

This “Deep Learning” approach has now been used to produce lab rats (i.e., algorithms) that can run all sorts of mazes. Clever lab technicians built a “maze” out of Van Gogh paintings, and after learning the maze the algorithm could transform any photograph into the style of Van Gogh. A Brooklyn team built a maze out of Shakespeare’s entire catalog of sonnets, and after learning that maze the algorithm could generate personalized poetry in the style of Shakespeare. The deeper the maze, the deeper the relationships that can be mimicked by the rat that runs through it. Google, Apple, Facebook and other tech giants are building very deep mazes out of our image, text, voice and video data. By running through them, the algorithms are learning to mimic the basic contours of human speech, language, vision and reasoning—in more and more cases, well enough that the algorithm can converse, write, see and judge on our behalf. (Did you all see the Google Duplex demo last week?)

There are two immediate advantages to the Deep Learning approach—i.e., to unsupervised, trial-and-error rat running, versus supervised, stimulus-response dog training. The obvious one is that it demands less human supervision. The “technical debt” problem is reduced: instead of spending years manually labelling interesting features in the raw data for the machines to train on, the rat can find many interesting features on its own. 
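If you want to see the “no human labelling” idea in code, here is a minimal sketch of an autoencoder, built with the Keras library on stand-in data. Its only task is to reconstruct its own input, so the target is the input itself and no labels are needed; the compact features it discovers along the way come free of charge:

```python
# A minimal sketch of unsupervised feature-finding: an autoencoder learns to
# compress and reconstruct its inputs with no human-written labels at all.
# Keras is assumed to be installed; the "images" are random noise, standing in
# for real data purely to keep the example self-contained.
import numpy as np
from tensorflow.keras import layers, models

unlabelled_images = np.random.rand(1000, 784).astype("float32")  # stand-in for 28x28 images

autoencoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(32, activation="relu"),      # the bottleneck: a compact map of the "maze"
    layers.Dense(784, activation="sigmoid"),  # reconstruct the original input
])
autoencoder.compile(optimizer="adam", loss="mse")

# The "cheese" is simply reconstructing the input: target == input, so no labels.
autoencoder.fit(unlabelled_images, unlabelled_images, epochs=5, batch_size=64, verbose=0)
```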

The second big advantage is that the lab rat can learn to mimic more complicated, more efficient pathways than a dog trainer may even be aware exist. Even if I could, with a mix of rewards and punishments, train a dog to take the path that I see through the maze, is it the best path? Is it the only path? What if I myself cannot see any path through the maze? What if I can navigate the maze, but I can’t explain, even to myself, the path I followed to do it? The maze called “human language” is the biggest example of this. As children, we just “pick up” language by being dropped in the middle of it. 

 

So THAT’S What They’re Doing

No one seems to have offered this “rat in a maze” analogy before. It seems a good one, and an obvious one. (I wonder what my closest AI researcher-friends think of it—Rob, I’m talking to you.) And it helps us to relate intuitively to the central challenge that Deep Learning researchers (i.e., maze architects) grapple with today:

Given a certain kind of data (say, pictures of me and my friends), and given the useful behavior we want the lab rat to mimic (say, classify the pictures according to who’s in them), what kind of maze should we build? 

Some design principles are emerging. If the images are full color, then the maze needs to have at least three levels (Red, Green, Blue), so that the rat learns to navigate color dimensions. But if the images are black-and-white, we can collapse those levels of the maze into one. 

Similarly, if we’re dealing with data that contains pretty straightforward relationships (say, Column A: historical data on people’s smoking habits and Column B: historical data on how old they were when they died), then a simple, flat maze will suffice to train a rat that can find the simple path from A to B. But if we want to explore complex data for complex relationships (say, Column A: all the online behaviors that Facebook has collected on me to-date and Column B: a list of all possible stories that Facebook could display on my Newsfeed today), then only a multi-level maze will yield a rat that can sniff out the stories in Column B that I’d click on. The relationship between A and B is multi-dimensional, so the maze must be multi-dimensional, too. Otherwise, it won’t contain the path. 
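Here is a rough sketch, again in Keras, of what those two kinds of maze might look like in code. The layer sizes are arbitrary and the models are untrained; the point is only the contrast between a flat maze for a simple relationship and a multi-level maze for a complex, full-colour problem (note the three colour channels, which would collapse to one for black-and-white images):

```python
# Two invented "mazes", purely to illustrate the design contrast.
from tensorflow.keras import layers, models

# A flat maze: one input column (e.g. smoking habits) -> one output (e.g. age at death).
shallow_maze = models.Sequential([
    layers.Input(shape=(1,)),
    layers.Dense(1),
])

# A multi-level maze for 64x64 full-colour photos: note the 3 colour channels
# (a black-and-white problem would collapse that final dimension to 1).
deep_maze = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # e.g. which of ten friends is in the photo
])
deep_maze.summary()
```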

We can also relate to other challenges that frustrate today’s maze architects. Sometimes the rat gets stuck in a dead-end. When that happens, we either need to tweak the maze so that it doesn’t get stuck in the first place, or teleport the rat to some random location so that it learns a new part of the maze. Sometimes the rat gets tired and lazy. It finds a small crumb of cheese and happily sits down to eat it, not realizing that the maze contains a giant wheel of cheese—seven levels down. Other times, the rat finds a surprising path through the maze, but it’s not useful to us. For example, this is the rat that’s been trained to correctly identify any photograph taken by me. Remarkable! How on earth can it identify my style…of photography!?! Eventually, we realize that my camera has a distinctive scratch on the lens, which the human eye can’t see but which the rat, running through a pixel-perfect maze, finds every time. 

 

Next Steps

These are the analogies I’m using to think about different strands of “AI” at the moment. When the decision rules (the how) are clear and knowable, we’ve got cooks following recipe books. When the how isn’t clear, but what we want is clear, we’ve got animal trainers training machines to respond correctly to a given input. And when what we want isn’t clear or communicable, we’ve got maze architects who reward lab rats for finding paths to the cheese for us. 

In practice, the big AI developments underway use a blend of all three, at different stages of the problem they’re trying to solve.

The next question, for a future letter, is: Do these intuitive (and imperfect) analogies help us think more clearly about, and get more involved in, the big questions that this technology forces society to confront? 

Our experience with “expert systems” taught us to understand, and appreciate, more fully how human experts make decisions. Will our experience with “artificial intelligence” teach us to understand and appreciate human intelligence more fully?

Even the most hyped-up AI demonstration right now—Google Duplex—is, essentially, a lab rat that’s learned to run a maze (in this case, the maze of verbal language used to schedule appointments with other people). It can find and repeat paths, even very complicated ones, to success. Is that the same as human intelligence? It relies entirely upon past information to predict future success. Is that the same as human intelligence? It learns in a simplified, artificial representation of reality. Is that the same as human intelligence? It demands that any behavior be converted into an optimization problem, to be expressed in numerical values and solved by math equations. Is that the same as human intelligence? 

At least some of the answers to the above must be “No.” Our collective task, as the generations who will integrate these autonomous machines into human society, to do things and make decisions on our behalf, is to discover the significance of these differences. 

And to debate them in terms that invite non-specialists into the conversation.

 

Brave voyages,

Chris

Map #30: Violence, Conformity and a Toronto Van Attack

Map #30: Violence, Conformity and a Toronto Van Attack

I was going to write about something completely different this week. Then I saw the news headline that a van had plowed through a mile of sidewalk in Toronto, killing nearly a dozen people and injuring dozens of others. It suddenly felt wrong to continue with the work I had been doing, and it felt right to meditate on the meaning and the causes of what had just happened.

That profound sense of wrongness was itself worth thinking about. Here I am in London, England, staring out my window at another wet spring morning. When, last year, a van plowed into pedestrians on London Bridge, or when a car plowed through pedestrians outside the Houses of Parliament, or when a truck drove through a Berlin Christmas market, or through a crowd of tourists in Nice, I read the headlines, I expressed appropriate sympathy, and then I went on about my business.

In my letter a couple weeks ago, I shared a quote from Jonathan Sacks, who said that, “The universality of moral concern is not something we learn by being universal but by being particular.” It’s when violence shatters the lives of my fellow Canadians that I am deeply touched by the inhumanity of the act. I understand these words better, now.

To reach some deeper insights into what happened in Toronto, and into similar events in the past and yet to come, I picked up The Human Condition (1958), by Hannah Arendt. Hannah Arendt (1906-1975) was one of the biggest political thinkers of the 20th century. From her first major book in the 1950s onward, she tried to make sense of a lot of the same things that we are all trying to wrap our heads around today: the rise of authoritarian regimes, the consequences of automation, the degradation of our politics, and the tensions between what consumer society seems to offer us and what we actually need to be fulfilled by life.

I came away from her book with some helpful insights about three or four big topics in the public conversation right now.

I’ve only got space here to reflect on last week’s van attack. But the biggest takeaway for me from this book was that, whatever horror hits the day’s headlines, if we can, through these events, grow our understanding of the human condition, then we will go to sleep stronger than when we woke up in the morning.

And that, at least, is something positive.

Brave voyages, to us all,

Chris

 

Hannah’s Big Idea in 500 words

To grasp Hannah’s insights into our present-day headlines, we need to give her the courtesy of 500 words to tell us her big idea.

In a nutshell, her big idea is that all human doing falls into one of three buckets: labor, work and action. Now for most of us, these three words kinda mean the same thing; their definitions blur and overlap. But Hannah says, No, no, no, the differences between these words mean everything. The better we grasp their distinctions, the more clearly we will grasp the human condition—and, by extension, why everything is happening.

Work is like craftsmanship or art. We are at work when, like the sculptor, we start with an idea and turn it into reality. Work has a definite beginning and end. When the work is over, we have added something new to the world.

To Hannah’s mind, the Industrial Revolution largely destroyed work, destroyed craftsmanship, so that today it’s really only artists who experience the intrinsic value of having made something.

In the modern world, most of us don’t work anymore. Instead we labor. Labor has neither a beginning nor an end. It is an unending process of producing stuff that is consumed so that more stuff can be produced and consumed. Most laborers work on only a piece of the whole, with only a faint grasp of that whole, and little or no intrinsic satisfaction from having contributed to it. As laborers, we do not make things; we make a living. And as the cliché goes: when we’re old and grey and look back on our lives, we won’t remember the time we spent at the office. Why? Because that time was largely empty of intrinsic value; it was empty of stuff worth remembering. (Hannah could be a bit dark at times.)

Action is, for Hannah, the highest mode of human doing. To act is to take an initiative. To begin. To set something in motion. Unlike work, which is the beginning of something, action is the beginning of someone. It is how we distinguish ourselves and make apparent the plurality of human beings. If our labor reveals what we are (lawyer, banker, programmer, baker), then our actions reveal who we are. Through our words and deeds, we reveal our unique identity and tell our unique story. Action has a beginning, but it has no end—not one that we can see, anyway, because its consequences continue to unfold endlessly. (In her glorification of speech and action, Hannah lets slip her love affair with ancient Greece. More on that later.)

In short, for Hannah the whole human condition can be understood in the distinctions and conflicts that exist between labor, work and action. Through that lens, I think she would say the following about the Toronto van attack.

 

This Toronto man was reaching for immortality

Hannah, a Jew, fled Nazi Germany in 1933. She knew a lot about the horrors of violence, she studied violence, and she strove to understand it.

Hannah, I think, would have zeroed in on the driver’s desire to commit “suicide by cop,” and the consequence of his failure to do so. She wrote:

The essence of who somebody is can come into being only when life departs, leaving behind nothing but a story. Whoever consciously aims at being “essential,” at leaving behind a story and an identity which will win “immortal fame,” must not only risk his life but expressly choose (as, in Greek myth, Achilles did) a short life and premature death. 

Only a man who does not survive his one supreme act remains the indisputable master of his identity and possible greatness, because he withdraws into death from the possible consequences and continuation of what he began. 

But, because of the restraint shown by the arresting officer, the man was denied the privilege of writing his own ending. He remains alive to face the unfolding consequences of “his one supreme act.” Instead of summing up his whole life in that one single deed, his story will continue to unfold, piecemeal. With each fresh, unintended page, he will be less the author, and more the witness, to his own place in history. He sought to win immortal fame. Instead he will live, be locked away, and be forgotten.

Those who feel weak, get violent

What about this “incel” movement—this involuntary celibacy stuff—which seems to have inspired the man’s rampage? Hannah wrote:

The vehement yearning for violence is a natural reaction of those whom society has tried to cheat of their strength.

Hannah thought of strength as something that individuals possess. She distinguished strength from power, which is something that people possess—but only when they act together. In a contest between two individuals, strength (mental or physical) decides the winner. In a contest between two groups, power—not numbers—decides the winner. (That’s why history is full of examples of small but well-organized groups of people ruling over giant empires.)

But in a contest between individual strength and collective power, collective power always wins. We saw this truth in the aftermath of the Toronto attack: the public’s coming together in the wake of the driver’s rampage showed just how helpless he was, whatever weapon he wielded, to change the way things are.

We’ve also seen this truth on a larger scale, Hannah argued, in “passive resistance” movements like that of Mahatma Gandhi.

Popular revolt against strong rulers can generate an almost irresistible power—even if it foregoes the use of violence in the face of vastly superior material forces….

To call this “passive resistance” is certainly ironic; it is one of the most powerful ways of action ever devised. It cannot be countered by fighting—the other side refuses to fight—but only by mass slaughter (i.e., violence). If the strong man chooses the latter, then even in victory he is defeated, cheated of his prize, since nobody can rule over dead men.

Hannah’s point is that individual strength is helpless against collective power. For some groups, that’s been the secret to liberation: liberation from the strongman, the tyrant, the dictator. For some individuals, that’s been the explanation for their imprisonment: imprisonment to social norms, to shifting values, to their own sense of impotence.

How do we build a society of healthy individuals?

“Individual strength is helpless against collective power.” If Hannah was right about that, then the big question we need to ask ourselves is: How do we build a society of healthy individuals—a society that doesn’t suffocate, but instead somehow celebrates, individual strength?

To be sure, “involuntary celibates” who rage against their own frustrations are maladjusted, and they need to bear their own guilt for that. But, Hannah would argue, explosions of violence in our midst also remind us that the power of the group to make us conform is awesome.

So how do we each assert our uniqueness within society? It’s a deadly serious question.

 

The Greek solution

For Hannah, who saw only three basic choices in front of each of us—labor, work or action—the only hope for modern man and woman to assert their individuality lay in the last: action. Labor is the activity that deepens our conformity, until even our basic biological choices of when to sleep and when to eat are set by the rhythm of the economic machine. And the industrial revolution destroyed whatever private satisfactions once existed in the craftsman’s “work”.

So we’re left with action. And, Hannah mused, we’ve got two choices: either we create a public arena for action to be put on display, or people will create it on their own. The Toronto van attacker did the latter.

The ancient Athenians did the former. Their public arena for action was the political arena, the polis. In our modern economic mind, we tend to think of the political arena as an unproductive space where “nothing gets done.” But to the ancient Athenians, it was the primary public space where individual achievements were asserted and remembered.

The polis was supposed to multiply the occasions to win “immortal fame,”—that is, to multiply the chances for everybody to distinguish himself, to show in deed and word who he was in his unique distinctness. 

The polis was a kind of “organized remembrance,” so that “a deed deserving fame would not be forgotten.” Unlike the products of human labor and work, the products of human action and speech are intangible. But through the polis, they became imperishable.

For ancient Athenians, the political arena squared the circle. It transmuted society’s awesome power to force conformity into a shared celebration of individual strength. Or, as the philosopher Democritus put it in the 400s BC:

“As long as the polis is there to inspire citizens to dare the extraordinary, all things are safe; if it perishes, everything is lost.” 

 

How do WE square the circle?

Unfortunately, when it comes to the healthy assertion of individuality today, we modern people have painted ourselves into a corner. And we keep painting it smaller.

Politics isn’t a viable place for us to immortalize deeds anymore, because (in a complete reversal of how the ancient Greeks saw it) politics produces nothing for consumption—and is therefore unproductive.

But our primary productive activity—labor—is, likewise, empty of distinguishing deeds that we might want to immortalize. (Again, that seems to be the truth we all admit on our deathbeds.)

Maybe one day soon, when automation frees us from the need to perform any labor at all, we will then use our abundant time to develop our individual excellences. Hannah had grave doubts about that. More likely, having already formed the habit of filling our spare time with consumption, we will, given more spare time, simply consume more—more food, more entertainment, more fuel for our appetites.

The danger is that such a society, dazzled by its own abundance, will no longer be able to recognize its own emptiness—the emptiness of a life which does not fix or realize anything which endures.

Even our “private lives”—where we should feel no need to assert our individuality against the group at all—are now saturated with the need to perform for others.

The conclusion, which would be really funny if it weren’t so serious, is that for all our runaway individualism, it may be that modern society suffers from a crisis of individuality.

 

Hold onto the dilemma

Hannah didn’t solve these problems for us. (I guess that’s why she called this book The Human Condition.) But she did frame two powerful questions that can help us respond meaningfully to senseless acts of violence:

  1. How can I take part in the power of people acting together?
  2. What is the arena in which I celebrate individual distinctiveness—mine, and others’?

This dilemma isn’t going away. ‘Squaring this circle’ between social conformity and individual strength is one of the big, underlying projects of our time.

 

Map #29: False Choices (‘Should Science Or Values Take Priority?’) 

Map #29: False Choices (‘Should Science Or Values Take Priority?’) 

I Hate False Choices

As many of you know, I’m actively exploring routes to re-engage with my Canadian…er…roots. One of those routes is a policy fellowship program. I recently went through the application process, which included responding to the following question:

 In policy making, science and evidence can clash with values and perspectives. What should take precedence and why?

False choices like this one are a special hate of mine. So I thought I’d share my brief rant with you all. (If you share my special hate, or hate my special rant, let’s chat!)

<begin rant>

The premise that “science and evidence” stand in opposition to “values and perspectives” is fatal to the project of liberal democracy.

The ultimate consequence of this premise is precisely the crisis that we now face in our politics—namely, the emergence of new, competing truth machines to support value-based policy agendas that were consistently denied validity by the truth machine of “science and evidence.”

This error goes all the way back to the Enlightenment, when we separated science from values, elevated science to the status of higher truth and gave science a privileged, value-free position from which to survey the unenlightened.

That act of hubris planted the seed for the present-day rebellion by every value that is “held” (and therefore is real at some level) yet denied the status of reality.

So long as we frame this tension between science and values as a clash, as an either/or that must be decided in favor of one side or the other, this rebellion will spread until it enfeebles, not just policy-making, but the whole liberal democratic project.

If science and evidence ought to take precedence, then logic will ultimately lead us to the China model of “democratic dictatorship.” There, the people chose a government in 1949 (via popular revolution), and it has been running policy experiments and gathering evidence ever since. Some experiments have been successful, some spectacularly not-so, but the Party retains the authority to learn, to adapt and to lead the people, scientifically, toward a material utopia. By force, when necessary.

If, instead, values and perspectives ought to take precedence, then far from wringing our hands at the proliferation of “fake news” and “alternative truths,” we should celebrate it. Now every value, not just Enlightenment values, has a truth machine that spews out authority, on the basis of which groups that hold a particular value can assert righteousness. Excellent! But then we ought to strike the “liberal” from liberal democracy, since constraints upon the exercise of authority have no privileged ground to stand on.

The only way to avoid these inevitable destinations of the either/or premise is to reintegrate science and values—at the level of policy-making and public discourse. We need a New Enlightenment. That is the task which the irruption of post-truth politics now makes urgent. To accomplish it, the Enlightened must first wake up to the values and intuitions that underlie our quest for evidence. For example: a sense that harmony with nature or with strangers is good; a conviction that we ought to put the welfare of future generations ahead of our own; a feeling that security is a good in itself.

We, the Enlightened, must reverse the order in which we validate our values—from “Here is the evidence, and therefore this is what we should value” to “Here is what I value, and here is the evidence that helps explain why it’s good.”

<end rant>