
Go googled, GBG still to go: 1

[ by Charles Cameron — games, games, games — & prepping a challenge for AI, the analytic community & CNA ]
.

Playing go, Hasegawa, Settei, 1819-1882, Library of Congress

**

In the past, computers have won such games as Pong and Space Invaders:

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own. Previously, founder Demis Hassabis and his team had used these techniques in building systems that could play classic Atari video games like Pong, Breakout, and Space Invaders. In some cases, these systems not only outperformed professional game players. They rendered the games ridiculous by playing them in ways no human ever would or could. Apparently, this is what prompted Google’s Larry Page to buy the company.

Wired, Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be

I can’t corral all the games they’ve played into a single, simple timeline here, because the most interesting discussion I’ve seen is this clip, which moves rapidly from Backgammon via Draughts and Chess to these last few days’ Go matches:

Jeopardy should definitely be included somewhere in there, though:

Facing certain defeat at the hands of a room-size I.B.M. computer on Wednesday evening, Ken Jennings, famous for winning 74 games in a row on the TV quiz show, acknowledged the obvious. “I, for one, welcome our new computer overlords,” he wrote on his video screen, borrowing a line from a “Simpsons” episode.

NYT, Computer Wins on ‘Jeopardy!’: Trivial, It’s Not

What’s up next? It seems that suggestions included Texas Hold’em Poker and the SAT:

Artificial intelligence experts believe computers are now ready to take on more than board games. Some are putting AI through the wringer with two-player no-limit Texas Hold ’em poker to see how a computer fares when it plays against an opponent whose cards it can’t see. Others, like Oren Etzioni at the Allen Institute for Artificial Intelligence, are putting AI through standardized testing like the SATs to see if the computers can understand and answer less predictable questions.

LA Times, AlphaGo beats human Go champ for the third straight time, wins best-of-5 contest

And of course, there’s Rock, Paper, Scissors, which you can still play on the New York Times site:

Rock Paper Scissors

**

Now therefore:

In a follow-up post I want to present what in my view is a much tougher game-challenge to AI than any of the above, namely Hermann Hesse’s Glass Bead Game, which is a major though not entirely defined feature of his Nobel-winning novel, Das Glasperlenspiel, also known in English as The Glass Bead Game or Magister Ludi.

I believe a game such as my own HipBone variant on Hesse’s would not only make a fine challenge for AI, but also be of use in broadening the skillset of the analytic community, and a suitable response to the question recently raised on PaxSIMS: Which games would you suggest to the US Navy?

As I say, though, this needs to be written up in detail as it applies to each of those three projects — work is in progress, see you soon.

**

Edited to add:

And FWIW, this took my breath away. From The Sadness and Beauty of Watching Google’s AI Play Go:

At first, Fan Hui thought the move was rather odd. But then he saw its beauty.

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google.

Now that’s remarkable, that gives me pause.

5 Responses to “Go googled, GBG still to go: 1”

  1. Grurray Says:

    Fortes fortuna iuvat (fortune favors the bold).
    Sometimes humans don’t make “human moves” either. We call it luck. The computer threw a screwball and got lucky.

  2. Charles Cameron Says:

    You’re guessing?
    .
    I think the machine may have considerable smarts, though I wouldn’t attribute consciousness to it.

  3. Grurray Says:

    Our friend Scott McWilliams tweeted another article about this subject a week or two ago
    .
    http://www.wired.com/2016/03/googles-ai-viewed-move-no-human-understand/
    .
    “Drawing on its extensive training with millions upon millions of human moves, the machine actually calculates the probability that a human will make a particular play in the midst of a game. “That’s how it guides the moves it considers,” Silver says. For Move 37, the probability was one in ten thousand. In other words, AlphaGo knew this was not a move that a professional Go player would make.
    .
    But, drawing on all its other training with millions of moves generated by games with itself, it came to view Move 37 in a different way. It came to realize that, although no professional would play it, the move would likely prove quite successful. “It discovered this for itself,” Silver says, “through its own process of introspection and analysis.””
    .
    The question is, did it think the move would be successful because it was a good move in itself and humans just hadn’t discovered it in their collective experience? If so, it was another step towards quantifying all experience by brute-force calculations.
    .
    Or would the move be successful because it was so shocking that employing it would be operating inside the adversary’s OODA loop? Making the move would, in Boyd’s words:

    “Employ a variety of measures that interweave menace, uncertainty, mistrust with tangles of ambiguity, deception, novelty as basis to sever adversary’s moral ties and disorient”
    .
    I don’t know which it was, but the latter option would definitely be more interesting. It certainly had that effect on the other player.

  4. Grurray Says:

    And if it was the latter Boyd option, then it was a human move. So very, very human.
    It would confirm my worst fears. Not that AI is getting so smart that the machines will take over, but that our leaders and experts are getting so dumb that the machines will take over.

  5. Charles Cameron Says:

    Thanks, Grurray, and Mac too — much to chew on here.
    .
    I’m not even sure how the language currently available to us stands up when straining to think coherently about the situation. You can imagine how queasy reading “another step towards quantifying all experience” makes me! And while I like the Boydian idea, it doesn’t seem to me that the process of playing different variants of AlphaGo against each other would manage to factor in that effect.
    .
    As for “its own process of introspection and analysis” — “analysis” ruffles my verbal sense less than “introspection” here, but the choice of “introspection” really does seem pretty anthropomorphic in an unsettling way, implying to my ear a subjectivity that “analysis” doesn’t. I appreciated that final paragraph, in which the writer seems to have picked up on that same feeling:

    Is introspection the right word? You can be the judge. But Fan Hui was right. The move was inhuman. But it was also beautiful.

    Fascinating.

