Thursday, December 28, 2017

Self-Teaching Chess Algorithms: Is This Checkmate for Humanity? - by Gary North

From KurzweilAI:
Demis Hassabis, the founder and CEO of DeepMind, announced at the Neural Information Processing Systems conference (NIPS 2017) last week that DeepMind's new AlphaZero program achieved a superhuman level of play in chess within 24 hours.
The program started from random play, given no domain knowledge except the game rules, according to an arXiv paper by DeepMind researchers published Dec. 5.
“It doesn't play like a human, and it doesn't play like a program,” said Hassabis, an expert chess player himself. “It plays in a third, almost alien, way. It's like chess from another dimension.”
AlphaZero's 'alien' superhuman-level program masters chess in 24 hours with no domain knowledge -- https://tinyurl.com/y9lcqy8q
I started programming IBM machines in the late 1960s, and at the time there was talk about the possibility of a computer someday beating a human at chess. Almost no one was talking seriously about a computer learning chess on its own, and not merely learning it but mastering it. And mastering it in 24 hours. AlphaZero is mind-boggling.
What will AlphaZero be doing in three years? Five? Will we be carrying AlphaZero around in our pockets? Our brains? Will some other AI be the new king of the hill? Will AlphaZero be regarded as quaintly primitive by then? Will Kurzweil's 2029 prediction of a computer passing as human in a Turing test arrive earlier than expected?
And what will humans be like in 2029? Here's a guy working from the other end:
Humans 2.0: meet the entrepreneur who wants to put a chip in your brain -- https://tinyurl.com/gfs543chip
The article he cites begins with this:
Bryan Johnson isn’t short of ambition. The founder and CEO of neuroscience company Kernel wants “to expand the bounds of human intelligence”. He is planning to do this with neuroprosthetics; brain augmentations that can improve mental function and treat disorders. Put simply, Kernel hopes to place a chip in your brain.
It isn’t clear yet exactly how this will work. There’s a lot of excited talk about the possibilities of the technology, but – publicly, at least – Kernel’s output at the moment is an idea. A big idea.
“My hope is that within 15 years we can build sufficiently powerful tools to interface with our brains,” Johnson says. “Can I increase my rate of learning, scope of imagination, and ability to love? Can I understand what it’s like to live in a 10-dimensional reality? Can we ameliorate or cure neurological disease and dysfunction?”
This is a science-fiction scenario. Probably the best-known example of this theme in literature is the first Star Trek movie, which drew on ideas found in the writings of Isaac Asimov; Asimov was a science consultant for the movie. The theme is this: a coming singularity, a fusion of machines and human beings. A new form of life will emerge from this evolutionary development. This is the long sought-after leap of being that motivated alchemists five centuries ago.
The fact that the head of the company behind a self-teaching chess program has described its style of play as alien is indicative of the intellectual framework associated with the thesis of a coming singularity. The possibility of fusing human thought with digital algorithms that are somehow implanted in the human brain is science fiction. It is now being taken seriously by some futurologists.
Obviously, no one knows if this fusion is technologically feasible. The inherent nature of scientific innovation resists forecasts of what is or is not possible. As Arthur C. Clarke observed a generation ago, when a distinguished scientist states that something is impossible, he is very probably wrong.
THE MATHEMATICAL THEORY OF GAMES
A lot of focus is being placed on the digital nature of self-teaching algorithms. The approach is easily applied to games, which is where the great breakthroughs have been made over the past decade. The rate of accomplishment is now speeding up astronomically. But never forget this: chess is a matter of fixed rules. There are patterns in games of chess that are imposed by these rules. Computer programmers now find that they do not have to feed algorithms any human games or human strategies. This is the great breakthrough that has taken place over the last 18 months. Given nothing but the rules, the algorithm generates millions of games by playing against itself, discovers the patterns the rules impose, and then implements strategies in terms of these rules. Here is how an article in Wired described the process.
At one point during his historic defeat to the software AlphaGo last year, world champion Go player Lee Sedol abruptly left the room. The bot had played a move that confounded established theories of the board game, in a moment that came to epitomize the mystery and mastery of AlphaGo.
A new and much more powerful version of the program called AlphaGo Zero unveiled Wednesday is even more capable of surprises. In tests, it trounced the version that defeated Lee by 100 games to nothing, and has begun to generate its own new ideas for the more than 2,000-year-old game.
AlphaGo Zero showcases an approach to teaching machines new tricks that makes them less reliant on humans. It could also help AlphaGo’s creator, the London-based DeepMind research lab that is part of Alphabet, to pay its way. In a filing this month, DeepMind said it lost £96 million last year.
DeepMind CEO Demis Hassabis said in a press briefing Monday that the guts of AlphaGo Zero should be adaptable to scientific problems such as drug discovery, or understanding protein folding. They too involve navigating a mathematical ocean of many possible combinations of a set of basic elements.
Despite its historic win for machines last year, the original version of AlphaGo stood on the shoulders of many, uncredited, humans. The software “learned” about Go by ingesting data from 160,000 amateur games taken from an online Go community. After that initial boost, AlphaGo honed itself to be superhuman by playing millions more games against itself.
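The self-play process the article describes can be made concrete on a toy scale. What follows is my own sketch in Python, not DeepMind's code: tabular Monte Carlo value learning on tic-tac-toe, standing in for the far more elaborate neural network and tree search that AlphaGo Zero uses. The program is handed only the rules of the game; it never sees a human move.

# Toy self-play learner for tic-tac-toe. Given only the rules (legal moves,
# win detection), it learns move values purely from games against itself.
# This is a hypothetical illustration, not DeepMind's method.
import random
from collections import defaultdict

EMPTY = " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

Q = defaultdict(float)       # Q[(board, move)] -> learned value of that move
ALPHA, EPSILON = 0.3, 0.1    # learning rate, exploration rate

def choose(board, moves):
    if random.random() < EPSILON:                    # explore a random move
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # exploit the best known move

def train(episodes=50_000):
    for _ in range(episodes):
        board, player = EMPTY * 9, "X"
        history = []                                 # (state, move, player) per turn
        while True:
            move = choose(board, legal_moves(board))
            history.append((board, move, player))
            board = board[:move] + player + board[move + 1:]
            win = winner(board)
            if win or not legal_moves(board):        # game over: win or draw
                for state, m, p in history:          # credit every move played
                    reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                    Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
                break
            player = "O" if player == "X" else "X"

train()
print("state-action pairs evaluated:", len(Q))

After enough self-play, the value table should play a passable game without ever having consulted a human move -- the same principle as AlphaGo Zero, at a scale a desktop computer handles in seconds.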
RISK VS. UNCERTAINTY
I want to focus on this paragraph:
DeepMind CEO Demis Hassabis said in a press briefing Monday that the guts of AlphaGo Zero should be adaptable to scientific problems such as drug discovery, or understanding protein folding. They too involve navigating a mathematical ocean of many possible combinations of a set of basic elements.
There is a fundamental difference between drug discovery and playing a game. There are no fixed rules governing drug discovery. There are rules of thumb, but there are no formal rules.
The most mathematically sophisticated forms of economic theory rest on game theory. This goes back to the 1944 book Theory of Games and Economic Behavior, by the genius mathematician John von Neumann and the economist Oskar Morgenstern. This kind of mathematically sophisticated analysis is beloved by mathematically proficient economists. The methodological problem is this: the theory of games doesn't have any relationship to the real world. Murray Rothbard wrote about this over 40 years ago. Risk is not the same as uncertainty. This distinction was presented as early as 1921 by Frank H. Knight in his book, Risk, Uncertainty and Profit. Ludwig von Mises adopted Knight's analysis to explain the operation of the free market.
The mathematician John Nash won the Nobel Prize in economics for his theory of equilibrium in non-cooperative games. The same problem applies to Nash's theory as to von Neumann and Morgenstern's. The theory of games does not apply to the real world of decision-making. In most of our decision-making, we face uncertainty, not risk. Uncertainty is inherently unpredictable. We cannot insure against it, because the law of large numbers does not apply to uncertainty. It applies only to risk.
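The difference can be shown in a few lines of code. This is my own illustration, with made-up numbers: the law of large numbers lets an insurer price a risk that has a known, stable probability, but it offers nothing for a one-off uncertain event.

# The law of large numbers at work on risk: a known, stable loss probability.
# All figures here are fabricated for illustration.
import random

random.seed(42)
LOSS_PROB = 0.01       # each policy has a 1% chance of a claim
LOSS_SIZE = 100_000    # each claim pays $100,000; expected cost = $1,000/policy

def average_claim(num_policies):
    """Average payout per policy across a simulated pool."""
    total = sum(LOSS_SIZE for _ in range(num_policies)
                if random.random() < LOSS_PROB)
    return total / num_policies

# As the pool grows, the average claim converges on the expected value,
# so the insurer can charge a premium slightly above $1,000 and survive.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} policies: average claim = ${average_claim(n):,.2f}")

# There is no analogous computation for uncertainty: a unique event, such as
# whether one particular new drug will succeed, has no stable frequency to
# converge on, so the law of large numbers has nothing to grab.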
When there are patterns of cause-and-effect between certain kinds of behavior and certain undesirable results, such as the relationship between smoking and lung cancer, self-teaching algorithms can be of great benefit. The key to this benefit is the presence of predictable causation. In matters of entrepreneurial forecasting, entrepreneurs are always attempting to find statistical patterns that have not yet been recognized by competing entrepreneurs. In other words, they look for large numbers of events involving risk, and therefore also involving statistical probability, in outcomes that are generally assumed to be inherently uncertain. The quest for regularity is the key to profitability in economic affairs. Some entrepreneurs are able to do this better than others over long periods of time. The classic example is Warren Buffett. But he is the only example well known to economists.
I am all for self-teaching algorithms in pursuit of predictable regularities in what appear to be unpredictable areas of pure chance. I am also in favor of mathematicians who advise insurance companies. They are doing what self-teaching algorithms will probably be able to do within a decade. These self-teaching algorithms are going to be able to identify statistical patterns by accessing huge databases. Medical databases are the obvious place to start. Scientific investigators are going to start here. So are insurance companies. The promise of self-teaching algorithms is simply an extension of the traditional science of statistics. There is nothing fundamentally new in these algorithms, except this: the algorithms use brute computing power to identify the patterns. Human beings would not be able to identify these patterns without the assistance of these algorithms.
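Here is a cartoon of that brute-force pattern search, written by me on fabricated data -- not a real medical study. The program screens 200 candidate variables against an outcome and reports the strongest associations, a slog no human investigator would undertake by hand.

# Brute-force pattern search on synthetic data: screen many candidate
# variables against an outcome and report the strongest associations.
# Every number here is fabricated for illustration; no real data is used.
import random

random.seed(0)
NUM_RECORDS = 5_000
NUM_VARS = 200

# Random binary "exposure" variables; variable 17 secretly drives the outcome.
records = [[random.random() < 0.5 for _ in range(NUM_VARS)]
           for _ in range(NUM_RECORDS)]
outcome = []
for row in records:
    p = 0.6 if row[17] else 0.05   # the hidden causal pattern
    outcome.append(random.random() < p)

def outcome_rate(var, exposed):
    """Outcome frequency among records where variable var equals exposed."""
    group = [o for row, o in zip(records, outcome) if row[var] == exposed]
    return sum(group) / len(group)

# Risk difference per variable: P(outcome | exposed) - P(outcome | unexposed).
# The machine checks all 200; variable 17 should stand out from the noise.
effects = sorted(((outcome_rate(i, True) - outcome_rate(i, False), i)
                  for i in range(NUM_VARS)), reverse=True)
for diff, i in effects[:3]:
    print(f"variable {i:>3}: risk difference {diff:+.3f}")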
This is one reason why I think there will be major breakthroughs in the treatment of what are now regarded as degenerative diseases. I have in mind cancer. I also have in mind Alzheimer's. I don't see any downside to the development of such algorithms.
Then what about human behavior? Put differently, what about human action? Mises was correct in adopting Knight's distinction between risk and uncertainty. He presented this idea in his book, Human Action (1949).
All this has nothing to do with some looming singularity. It has nothing to do with the evolution of a new species. Why would anybody want a brain implant to enable him to play a superior game of chess? Why not just buy a better program? That sure is a lot cheaper. Yes, a chess player can brag to his friends: "My self-teaching algorithm is a better player than your self-teaching algorithm." But what is the point?
When it comes to discovering patterns in gigantic quantities of digital data, I have no doubt that self-teaching algorithms will be able to do it far better than smart human investigators. That is good news. In no clear-cut way is this a threat to human liberty or human well-being. It will be bad news for the cells that cause cancer, but who cares? Only the really hard-core followers of the religion of Gaia. Frankly, I don't care about their problems, any more than I care about the cancer cells' problems.
Most of human life cannot be successfully digitized. Causation is not digital. It is personal, and persons are legally responsible. Most of human life is not governed by the law of large numbers. So, I don't see a major threat to our liberty posed by the widespread use of self-teaching algorithms. In any case, if one algorithm gets an advantage in taking away my liberty, I will spend money to find another algorithm that fights back. There will be lots of competition for such algorithms.
Competition among algorithms, even self-teaching algorithms, will lead to greater liberty. I did not perceive this clearly two decades ago, but it has become increasingly clear to me. I would like to think that there is a self-teaching algorithm out there that will enable me to explain this better, but somehow I doubt that there is. That's because I don't think my ability to explain this is dependent upon the law of large numbers. The ability to understand and explain is not the same as the ability to crunch numbers in gigantic databases. In other words, the ability to be victorious in a game governed by rules is not the same as the ability to forecast human behavior accurately, and especially not to forecast it in such a way that you can earn a profit.
CONCLUSION
Risk is not the same as uncertainty. This fact -- and it is a fact -- is fundamental for understanding the debates over the threats of self-teaching algorithms versus their benefits. The various theories of the threats all rest on this presumption: that human decision-making is fundamentally digital, not analog and personal. This assumption is wrong. Our entire civilization is built on the presupposition that this assumption is wrong. Our courts of law rest on the assumption of personal responsibility, not digitally determined irresponsibility.
Those analysts who see a great threat to human liberty from self-teaching algorithms deny the analytical distinction between risk and uncertainty, and also deny the analytical distinction between game-playing and entrepreneurship in a world of uncertainty. If this analytical distinction is not a fact, then we really do face the possibility of the singularity. We really do face an evolutionary leap of being. If the newly evolved species is malevolent, then humanity is at risk. I don't think humanity is at risk from self-teaching algorithms. There is uncertainty, but there is no risk. There is no risk because the world is not digital. It is providential. This, of course, is a statement of faith. Those who don't accept this statement of faith had better give careful consideration to the advent of self-teaching algorithms. This is what is bothering Elon Musk. It is also bothering Stephen Hawking. It doesn't bother me. I have a very different view of cosmological causation.
If I live long enough to have a brain chip implant, I'm going to turn down the offer. I'll just buy a computer program instead. It probably won't be a smart phone program. I will still use my dumb phone, but I will have a better desktop computer.