One of the hottest stories here in China last week received far less coverage outside the country: the Human-vs-Machine Go Showdown, between the world’s top Go player, Ke Jie, and Google’s AlphaGo AI.

I’ve met Garry Kasparov a few times, and I’ve heard first-hand his story of facing off across the chess board against IBM’s Deep Blue in 1997. (Short version: it was stressful.) This was a rematch of sorts, with different champions on both sides and a new game-board between them: Go, or weiqi as it’s known here. Weiqi (literally, “encirclement chess”) is a simple-looking game in which opponents take turns laying their stones (white or black) on a board comprising 19 horizontal and 19 vertical lines. Whoever encircles more territory wins. It’s simple to play, but hard to win.

In chess, each game begins with all the pieces in the same position, and each turn offers a handful of legal moves. As the game progresses, pieces are removed and the game’s future simplifies. But in Go, the game begins with an empty board, and each turn presents hundreds of choices. As the game progresses, pieces are added and the endgame becomes increasingly complex.

Deep Blue was able to defeat Kasparov through brute processing strength: by looking at the board and crunching through possible lines of play many moves into the future. But this approach fails with Go. Instead, Google’s AlphaGo applies recent advances in machine learning and neural networks to look at the board less like a computer does, and more like a human would, evolving its goals and strategies as the game progresses.

In the midst of the three-day Go tournament, I gave an interview to a major Chinese science magazine. Here’s a (lightly edited) excerpt. Hope you find it provocative…

Who is going to win: the human or the machine?

History has shown us that whenever we teach a computer a human skill, it very quickly becomes better at that skill than any human.

The first computers that could do math were slower than a skilled human. It took a long time to program the computer to add, subtract, multiply or divide. The human engineer with his slide rule could do the math much faster. But within a few years, the computer could complete thousands of calculations in the time it took a human to complete one. Today, it can perform billions.

The very first computers had limited memory. And memory was very expensive. Today’s computers can remember everything that all humanity writes, sees and does.

The very first chess computers could play only basic, pre-programmed sequences. By 1997, IBM’s Deep Blue was able to beat the world champion.

Whenever we teach a computer a human skill, it quickly becomes better at that skill than any human ever was, or ever will be. That is because the computer has almost unlimited time, patience, stamina, and memory. In the time it takes us to play one game, it can play, and learn from, billions. While we are sleeping, it is practicing.

Whether or not AlphaGo wins this match is not the point. The moment we taught it to play Go, we guaranteed its eventual victory. What will we teach the machines to do next?

Should we fear the growing power of AI, or welcome it?

Steve Jobs used to say, “The computer is a bicycle for the mind.” What did he mean by this? If you take all the animals in the animal kingdom and rank them by how efficiently they travel (calories burned per kilometer), then humans rank quite low. Birds rank near the top, followed by four-legged animals, and so forth. But a human on a bicycle ranks far ahead of even the most efficient bird.

Just as a bicycle makes human travel more efficient, computers make the mental work that humans do more efficient. They help us to remember, to compute, to do repetitive tasks. AI promises the same benefit, except that it can help us be more efficient at higher-order mental tasks: reasoning, judging, creating, and so on.

In this way, AI will bring great good to humanity. It will help us make better decisions, and start making many decisions for us. One example that’s easy for everyone to understand is driving a car. Human drivers can be impaired, distracted, or emotional; we lose focus because the task of driving a car, while complex, can also become monotonous. The AI has better eyesight and reflexes than we possess, and it is always calm, alert and focused on driving the car, while constantly watching out for other cars and hazards. Early evidence suggests that AI can be a much safer driver than humans. In a similar way, AI can help us do a better job flying aircraft, buying and selling investments, or policing public spaces.

These are some of the early, obvious uses. As AI becomes more common and affordable, it may take over many decisions in our daily lives, such as what food to buy this week, or which people to visit and what we should say to them. It may also take over many white-collar jobs (e.g., most corporate research, which is mainly a bunch of decisions about what information is important and what information isn’t).

A bicycle reduces the time and energy we spend traveling, and leaves us more time and energy to do things at our destination. AI will reduce the time and energy we spend making many routine judgments and decisions, and will leave us more time and energy to do meaningful, creative work. That can be a very good thing.

But AI will also create some tough problems. The obvious one is: a lot of people are going to lose their jobs. That will create a lot of stress and anxiety. It will make some rich people even richer, and it will cause some middle-class people to fall into poverty—unless we first figure out how to share these costs and benefits more fairly across society.

The deeper danger is that, if AI takes control over more and more of the decisions in our lives, will we become less free? Already, Google’s search engine decides for me which search results it thinks I, Chris Kutarna, will want to see, and shows me those. But what about the search results that it doesn’t show me? How can I disagree with a decision that gets made without my knowledge? Now, fast forward to the future and imagine a sales AI that knows my personality and can perfectly manipulate me to buy a vacation to, say, Brazil. And I go to Brazil and I do have a great time. I thought that I had made this decision myself, but actually I was manipulated by a travel agency, which profited from my “decision”. Or imagine a political AI that perfectly manipulates people to vote a certain way, or to protest a certain action.

The power to help us make decisions and the power to manipulate us into making certain decisions is the same power. The only difference is who controls the AI. And that’s the danger.