A while back, Yuval Noah Harari wrote a book called “Sapiens” in which he discusses how mankind came to dominate our planet. It is not a new story, but it caused a bit of a stir.
Now he has a new book out called Homo Deus. Ezra Klein interviewed Harari for Vox, and frames the book with this general idea:
Homo Deus: A Brief History of Tomorrow is about what comes next for humanity — and the threat our own intelligence and creative capacity poses to our future.
Before discussing the new book, we should understand one thing about it. Because the book is about the future, no one can say that it is correct in any sense of the word. The future is inherently beyond what we know as “true”; it has not happened yet. We can talk about probabilities, but not actualities. In that sense, the book is by definition “incorrect”, and therefore talk about the future is inherently strategic rather than descriptive.
Having said that, Harari’s claim is that it is highly probable that the domination of the planet by humans will end over the next 300 years.
Aha! Notice the shock value of the claim. Harari is being very clever here. He knows that to get and hold our attention, he needs to lead with an idea that runs counter to our intuition – something that stimulates a bit of fear. It is the anchoring tool that has gotten humans excited for as long as we have been sharing ghost stories.
So what is this loss of planet dominance all about? The answer has less to do with our producing amazing new technologies that will kill us. It has more to do with Harari’s sense that humans are pretty shitty creations in the first place: super-intelligence will simply be far more efficient at doing what we can do.
Super-intelligence — achieved through highly advanced and interconnected computing devices — will make humans obsolete, and good riddance to bad trash! He says:
What we are talking about in the 21st century is the possibility that most humans will lose their economic and political value. They will become a kind of massive useless class — useless not from the viewpoint of their mother or of their children, useless from the viewpoint of the economic and military and political system. Once this happens, the system also loses the incentive to invest in human beings.
BTW, Harari is not saying that AI will become conscious. That will not be necessary. It will continue acting according to patterns that humans initially created, but far more intelligently than any human could. He says:
There is a lot of confusion about what artificial intelligence means or doesn’t mean, especially in places like Silicon Valley. For me, the biggest confusion of all is between intelligence and consciousness. Ninety-five percent of science fiction movies are based on the error that an artificial intelligence will inevitably be an artificial consciousness. They assume that robots will have emotions, will feel things, that humans will fall in love with them, or that they will want to destroy us. This is not true.
Hmmm … but isn’t the human trump card our ability to cooperate? To exchange information between humans? Harari has a pretty interesting answer to that one:
… for success, cooperation is usually more important than just raw intelligence. But the thing is that AI will be far more cooperative, at least potentially, than humans. To take a famous example, everybody is now talking about self-driving cars. The huge advantage of a self-driving car over a human driver is not just that, as an individual vehicle, the self-driving car is likely to be safer, cheaper, and more efficient than a human-driven car. The really big advantage is that self-driving cars can all be connected to one another to form a single network in a way you cannot do with human drivers.
Damn! Trumped again! Consider the practice of medicine.
If you think about medicine, today you have millions of human doctors and very often you have miscommunication between different doctors, but if you switch to AI doctors, you don’t really have millions of different doctors. You have a single medical network that monitors the health of everybody in the world.
And the danger?
The whole attraction of machine learning and deep mind and AI for the people in the industry is that the AI can start recognizing patterns and making decisions in a way that no humans can emulate or predict. That means we have no ability to really foresee where the AI will develop. This is part of the danger. The scenarios in which AI goes beyond human intelligence are, by definition, the scenarios that we cannot imagine.
And if AI can do anything a human can do but better, there could come a time when it is no longer efficient to invest in producing more humans. Yikes!
Not only that!
… the other problem with AI taking over is not the economic problem, but really the problem of meaning — if you don’t have a job anymore and, say, the government provides you with universal basic income or something, the big problem is how do you find meaning in life? What do you do all day?
Indeed. Harari thinks that humans will be managed through our absorption in meaningless games.
So, how about it? Are we headed for the scrap heap of history, you and I … errrr … assuming that you (the entity reading this) are a human?
I agree with Harari in one respect. I agree that certain types of human behavior are likely to become less and less “mainstream”. This is nothing new. In ancient times, human males, like other mammals, fought each other for tribal dominance. That was considered normal — even a positive thing to ensure the continuation of the best genes. Tribal leadership these days does not require physically duking it out in the town square. Nor do alpha males lay claim to the virgins of their choice and first dibs on the choicest cuts of meat on the table. We are more “civilized”, right? Some of us are even vegan!
So what types of human behavior are likely to become less valuable in the future? In a world of economic plenty, human competition over access to resources should not require as much of our attention as it does now. Errr … translating that into our current real-world vocabulary, the notion of accumulating billions of dollars as a life ambition will look a bit silly. Why go to the trouble?
So what will be “worth the trouble”? What will motivate us? Dan Pink addresses this question in an interesting way in his book “Drive”. He did a fun TED talk about it too.
In other words, there are plenty of things that will absorb us. And at the end of the day, the question likely to absorb us most of all is the question that has always absorbed us: how we decide the type of future that we want. Our collective future is the one thing that by definition is greater than we are. It is something that might be “mastered”. And pursuing a better future gives us a reason to value “autonomy” in decision making. BTW, those three (purpose, mastery, and autonomy) are the trinity of Dan’s motivation categories.
As a species, our future focus has meant obsessing over the need for certainty that religion provided. That served us well for thousands of years. Later it meant obsessing over the possibilities of using reason to improve our future. You might say we moved from a God-centered view of the universe to a man-centered one. We are still rather enthralled by this obsession.
This is why some of us have been a bit taken aback when informed of how huge and complex the universe is compared to our existence here on our tiny planet. We humans are only a microscopic part of the universe. Damn! Just when we thought we were the center of all things, it turned out that we are not. Tant pis!
As we go forward, I think we will get over that. We will get over the idea that mankind has to be dominant in order to be anything. We are more likely to embrace the notion that the future is the most interesting thing in the universe. The most interesting, because it is the only thing that we can affect. And we affect it by “adding value”. If so, it is not unlikely that humans will grow more interested in — and indeed obsessed with — the question of how we can add value over time. Adding value will be cool! Not adding value will be gross!
Consider, how much of what you do “adds value” now. Are you simply a “consumer” (as economists would say)? Or do you “produce” anything worth sharing? In the old days, our value added came from work – physical work – most of the time. In the future, we will be free to add value in other, more creative ways.
In Harari’s future, systems will not tolerate this sort of human self-indulgence. But notice that Harari assumes something about the future. He assumes that systems will dominate individuals. That, it seems to me, is the most dystopian aspect of his thought. The fascism of the future?
Why should systems dominate? Why not a “new birth of freedom”? Indeed, I wonder why this thought seems so foreign to good old Harari. It is less so to me.