AlphaGo defeats human champion Ke Jie in Game 3 to sweep 3-0

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
AlphaGo retires from competitive Go after defeating world number one 3-0

WUZHEN, Zhejiang Province, May 27 (Xinhua) -- AlphaGo, DeepMind's artificial-intelligence Go program, defeated the world's top-ranked player Ke Jie for the third consecutive game in Wuzhen on Saturday.

Ke, playing white, resigned mid-game after battling for three and a half hours, concluding the Human vs. Machine contest on the ancient Chinese board game.

Demis Hassabis, founder of DeepMind, said it would be the last competitive game for AlphaGo.
The 19-year-old Ke applied strategies similar to those from Game Two, opening the final game by creating chances to fight from the start and ending with yet another action-packed performance.

Ke teared up near the end. He concluded the competition with a heartfelt remark, repeating, "AlphaGo is too perfect."

He also said that the bitterness of defeat would be a driving force in his future journey of exploring the mysteries of Go.

Offered consolation, Ke first apologized and then blamed himself. Believing that he could have done much better, he said, "I faced a cold, calm and terrifying opponent. To the best of my ability, I could only predict half of AlphaGo's moves. I wish I could have done better."

When asked to share their experiences with AlphaGo over the past five days, all eight Chinese Go players who took part in the Go summit said they had learned a great deal from AlphaGo and DeepMind.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Games that require statistical calculations are easily winnable by a computer. They can produce the entire solution set while a human has to compute a limited set of vectors based on experience and creativity. The computer simply produces the entire solution space and picks the route with the highest probability to win. There's nothing scary about that and it's certainly not evidence of a thinking machine.

As much as I wish we were close to 'real' AI, we aren't. So far, all we've done is make really fast look-up tables with some predictive behaviors that are based primarily on pattern-matching. No one has reproduced or even described the essence of what it means to 'think' like a human and, once that has been done, translating it into a machine will not be an easy task. I went to an AI conference last year and a few people tried to stand up and say we were close with absolutely retarded examples. They didn't even bother with a Q&A because it was obvious they had no clue.

Edit: this isn't necessarily about Go and I didn't make that very clear. See later posts for clarification.
 
Last edited:

lefenzy

Senior member
Nov 30, 2004
231
4
81
Games that require statistical calculations are easily winnable by a computer. They can produce the entire solution set while a human has to compute a limited set of vectors based on experience and creativity. The computer simply produces the entire solution space and picks the route with the highest probability to win. There's nothing scary about that and it's certainly not evidence of a thinking machine.

As much as I wish we were close to 'real' AI, we aren't. So far, all we've done is make really fast look-up tables with some predictive behaviors that are based primarily on pattern-matching. No one has reproduced or even described the essence of what it means to 'think' like a human and, once that has been done, translating it into a machine will not be an easy task. I went to an AI conference last year and a few people tried to stand up and say we were close with absolutely retarded examples. They didn't even bother with a Q&A because it was obvious they had no clue.

Huh. From what I've read about Go, it is a game where it's not possible for computers to calculate the likely outcomes for all possible moves, hence the achievement of a computer capable of playing the game.
 

Pulsar

Diamond Member
Mar 3, 2003
5,224
306
126

That is correct. While chess can be computed using the method he suggested, Go cannot.

https://en.wikipedia.org/wiki/Game_complexity

https://www.tastehit.com/blog/google-deepmind-alphago-how-it-works/

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
That is incorrect in regard to what's important. Computers do not think yet and there is no other method through which a computer can play a game. The solution space may change at each move, which precludes it from being calculated in its entirety at each move, but it can recalculate immediately and project an entirely different solution space with the available constraints. That's the only way AI thinks right now. If the game can't support that method of playing, it simply means the computer has a chance to lose, but that's a hard limit on what is currently happening. That's all a human is really doing as well when you really get down to the root of what a Go decision looks like.

Here's the algorithm it uses as found on Wikipedia:
As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network," both implemented using deep neural network technology.[2][9] A limited amount of game-specific feature detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks.[9]

The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[16] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.[2] To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the March 2016 match against Lee, the resignation threshold was set to 20%.[49]

The key points:
  1. Machine learning - This is not a new thing and it's certainly not indicative of thinking like a human. This is effectively predictive pattern-matching, which means take a crapload of data, find a pattern, and guess what will happen next. The solution space for a question that is being answered through machine learning is finite.
  2. Tree searching - Definitely not thinking. The entire tree can be computed up front and then searched for optimum answers depending on the question that's being asked.
  3. Extensive training - Pattern matching again. Watch what people do and then generate the most likely probability of the next move during a game based on all of that data. Again, the entire solution space is generated and then searched.
  4. Monte Carlo [tree search] - Monte Carlo anything means generating almost all or all of the solution space before searching through it for patterns.
  5. Value and Policy networks - Definitely not thinking. Assign values to particular moves or abide by policies to prefer or reject certain moves. This is implemented after the solution space is available because value and policy networks help traverse the most meaningful and/or statistically better moves.
The list can go on and on. No computers are thinking yet. It's absolutely a necessary step in the development of 'real' AI, but an AI winning a Go match isn't a scary event - yet.
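To make the Monte Carlo point in item 4 concrete, here's the flattest possible version of the idea: score each candidate move by running a pile of random playouts and pick the one that wins most often. The game below is a throwaway one-pile Nim variant I made up so the snippet runs; real MCTS builds a tree and AlphaGo layers the value/policy networks on top, so treat this as nothing more than the bare statistical skeleton.

```python
import random

# Toy game: one pile of stones, a move removes 1-3, whoever takes the last
# stone wins. Purely a stand-in so the sampling idea has something to run on.
def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def random_playout(stones, to_move, me):
    """Finish the game with random moves; return 1 if `me` takes the last stone."""
    if stones == 0:
        # the player who just moved (i.e. not `to_move`) took the last stone
        return 1 if (1 - to_move) == me else 0
    player = to_move
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return 1 if player == me else 0
        player = 1 - player

def flat_monte_carlo_move(stones, me=0, n_playouts=2000):
    """Pick the move whose random playouts win most often for `me`."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(stones):
        wins = sum(random_playout(stones - move, 1 - me, me) for _ in range(n_playouts))
        rate = wins / n_playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

print(flat_monte_carlo_move(10))  # a move chosen from sampled win rates, nothing more
```

The point being: it's statistics over sampled continuations. Useful, impressive at scale, but not what I'd call thinking.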

From the tastehit article:
How important are these results?
Superficially, both Go and chess seem to be representative of typical challenges faced by AI: The decision-making task is challenging, and the search space is intractable.

In chess, it was possible to beat the best human players with a relatively straightforward solution: brute force search, plus some heuristics hand-crafted (with great effort) by expert chess players. The fact that heuristics had to be hand-crafted is rather disappointing, because it does not immediately lead to other breakthroughs in AI: each new problem would need new hand-crafted rules. Also, it seems that it was a lucky coincidence that chess had a state space that was very large, but still small enough to be just barely tractable.

That's exactly the type of iterative AI development I would expect to see. Chess is easier for an AI to win, especially now that computing resources are more plentiful, while Go is harder. However, the underlying mechanisms by which Go is played by an AI haven't changed; what has changed is how the aforementioned mechanisms are deployed. In 15 or 20 years, I'm assuming we'll be reading similar articles about AI that uses AlphaGo as the predecessor the way Deep Blue is currently being used.

A good summary of opinions from experts and a few examples:

"Right now, all of the impressive progress we've made is mostly due to supervised learning, where we take advantage of large quantities of data that have already been annotated by humans.

"This supervised learning thing is not how humans learn.

"Before two years of age, a child understands the visual world through experiencing it, moving their head and looking around.

"There's no teacher that tells the child, 'in the image that's currently in your retina, there's a cat, and furthermore it's at this location' and for each pixel of the image say 'this is background and this is cat.' Humans are able to learn just by observation and experience with the world.

"In comparison to human learning, AI researchers are not doing that great."

"The short answer is: we have no idea. That's why it's very difficult to make predictions as to when 'human-level AI' will come about.

"Right now, though, the main obstacle we face is how to get machines to learn in an unsupervised manner, like babies and animals do."

"Some of the great successes lately have been things like deep learning — methods that take lots and lots of data and then are able to mimic human judgments about that data. Things like object recognition — where you show the computer a picture and it can label that picture, 'oh that's a woman by the beach,' that kind of thing.

"What's missing from a lot of these systems at the moment is a notion of a will — a desire to do something in the world. It's just doing what it's told, which is to map inputs to outputs.

"There's not a lot of room for creativity there. The kinds of problems we are asking these systems to do are not really on a path towards a sentient system."
 
Last edited:

bigi

Platinum Member
Aug 8, 2001
2,490
156
106
Chess, Go, and traveling salesman with many points can't be fully calculated by computers yet.

It needs AI to win.

There are more combinations in chess than atoms in the visible universe, by several orders of magnitude.

In 10-20 years - looks like it, but not yet.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Chess, Go, and traveling salesman with many points can't be fully calculated by computers yet.

It needs AI to win.

There are more combinations in chess than atoms in the visible universe, by several orders of magnitude.

In 10-20 years - looks like it, but not yet.

Are you just repeating what was said in the article? AI as we currently know it is basically a statistical spreadsheet and nothing more. That's the point and why it's still able to be beaten.

My initial post wasn't comprehensive and wasn't necessarily about Go even though that wasn't clear. Moves in Go, Chess, etc. may not be able to be fully resolved, but the process is still the same. The solution space is computed as fast as possible based on constraints that are used to focus the time and effort of the calculations. The only difference is the whole space can't be computed in some situations, which requires a little more smarts up front to limit the scope. None of this is the same as human thought, at least not the way we all want it to be. At some level, there probably are a lot of similarities, but it's still not the same... yet.
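If it helps, here's the kind of thing I mean by "computed under constraints," boiled down to a toy: a beam search that never holds more than a handful of candidate lines at once instead of the whole space. The puzzle and the closeness heuristic below are invented purely for illustration and have nothing to do with Go.

```python
# Toy illustration of constraining the search instead of enumerating everything:
# find a sequence of operations turning 1 into TARGET, but only ever keep the
# BEAM_WIDTH most promising partial solutions at each step.
TARGET = 37
BEAM_WIDTH = 5  # the "constraint": every other candidate line gets thrown away
OPS = {"+1": lambda x: x + 1, "+3": lambda x: x + 3, "*2": lambda x: x * 2}

def beam_search(max_steps=12):
    beam = [(1, [])]                        # (current value, ops taken so far)
    for _ in range(max_steps):
        candidates = []
        for value, path in beam:
            for name, fn in OPS.items():
                candidates.append((fn(value), path + [name]))
        for value, path in candidates:
            if value == TARGET:
                return path
        # keep only the few candidates whose value looks closest to the target
        candidates.sort(key=lambda c: abs(TARGET - c[0]))
        beam = candidates[:BEAM_WIDTH]
    return None                             # a narrow beam can simply miss the answer

print(beam_search())
```

Swap the hand-rolled heuristic for trained value/policy networks and the idea is the same: spend the budget on lines that look promising instead of trying to hold the whole space.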
 

bigi

Platinum Member
Aug 8, 2001
2,490
156
106
What article?

No, this is what I learned in college 20 years ago.
 

Pulsar

Diamond Member
Mar 3, 2003
5,224
306
126
Are you just repeating what was said in the article? AI as we currently know it is basically a statistical spreadsheet and nothing more. That's the point and why it's still able to be beaten.

My initial post wasn't comprehensive and wasn't necessarily about Go even though that wasn't clear. Moves in Go, Chess, etc. may not be able to be fully resolved, but the process is still the same. The solution space is computed as fast as possible based on constraints that are used to focus the time and effort of the calculations. The only difference is the whole space can't be computed in some situations, which requires a little more smarts up front to limit the scope. None of this is the same as human thought, at least not the way we all want it to be. At some level, there probably are a lot of similarities, but it's still not the same... yet.

The 'advances' in this technology are in teaching it to learn by itself - weighting certain factors and letting it play itself to generate strategy, rather than having millions of human records to teach it. That's important because self-taught systems are exactly what we need for many applications, from industrial manufacturing to autonomous driving. Right now we have to teach the robots how to move. If they can do it themselves, it removes an order of complexity from automation.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
The 'advances' in this technology are in teaching it to learn by itself - weighting certain factors and letting it play itself to generate strategy, rather than having millions of human records to teach it. That's important because self-taught systems are exactly what we need for many applications, from industrial manufacturing to autonomous driving. Right now we have to teach the robots how to move. If they can do it themselves, it removes an order of complexity from automation.

Training data is still crucial, but I agree that's the next best goal and that seems to be the way the industry is moving. I'm looking forward to when these types of algorithms are truly unbeatable by a human because that will seem like a distinct step forward.
 

agent00f

Lifer
Jun 9, 2016
12,203
1,243
86
That is incorrect in regard to what's important. Computers do not think yet and there is no other method through which a computer can play a game. The solution space may change at each move, which precludes it from being calculated in its entirety at each move, but it can recalculate immediately and project an entirely different solution space with the available constraints. That's the only way AI thinks right now. If the game can't support that method of playing, it simply means the computer has a chance to lose, but that's a hard limit on what is currently happening. That's all a human is really doing as well when you really get down to the root of what a Go decision looks like.

Here's the algorithm it uses as found on Wikipedia:


The key points:
  1. Machine learning - This is not a new thing and it's certainly not indicative of thinking like a human. This is effectively predictive pattern-matching, which means take a crapload of data, find a pattern, and guess what will happen next. The solution space for a question that is being answered through machine learning is finite.
  2. Tree searching - Definitely not thinking. The entire tree can be computed up front and then searched for optimum answers depending on the question that's being asked.
  3. Extensive training - Pattern matching again. Watch what people do and then generate the most likely probability of the next move during a game based on all of that data. Again, the entire solution space is generated and then searched.
  4. Monte Carlo [tree search] - Monte Carlo anything means generating almost all or all of the solution space before searching through it for patterns.
  5. Value and Policy networks - Definitely not thinking. Assign values to particular moves or abide by policies to prefer or reject certain moves. This is implemented after the solution space is available because value and policy networks help traverse the most meaningful and/or statistically better moves.
The list can go on and on. No computers are thinking yet. It's absolutely a necessary step in the development of 'real' AI, but an AI winning a Go match isn't a scary event - yet.

From the tastehit article:


That's exactly the type of iterative AI development I would expect to see. Chess is easier for an AI to win, especially now that computing resources are more plentiful, while Go is harder. However, the underlying mechanisms by which Go is played by an AI haven't changed; what has changed is how the aforementioned mechanisms are deployed. In 15 or 20 years, I'm assuming we'll be reading similar articles about AI that uses AlphaGo as the predecessor the way Deep Blue is currently being used.

A good summary of opinions from experts and a few examples:

"Thinking" is in all likelihood an emergent property of learning enough patterns, and learning what's usefully novel, which is exactly what these ML/"AI" algs do. They certainly "think" about chess/go much as humans seem do, but much more precisely which is why they win. I mean, neural nets are literally modeled on physical attributes of the brain, which humans certainly agree perform thinking. The recent acceleration in the field is mainly due to availability of much larger data sets.

Now, there are limitations to these techniques, but they're not due to the reasons you seem to believe; they're due to factors such as the physical limits of photolithography.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
I seriously given your posts above you ever attended an ML conference.

There are words missing from that post, so I'm not completely sure what you meant, but I've been to many including the afternoon part of mlconf in NYC in March. Oh, I also have a masters in computational analytics and machine learning. Thanks though.
 

agent00f

Lifer
Jun 9, 2016
12,203
1,243
86
There are words missing from that post, so I'm not completely sure what you meant, but I've been to many including the afternoon part of mlconf in NYC in March. Oh, I also have a masters in computational analytics and machine learning. Thanks though.

So you're sure the neural nets they use for decisions don't work more or less like neurons do, despite deep image nets predisposed to edge detection and such working pretty much the way we know vision does. Must be some magic substance between the ears nobody's seen yet.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
So you're sure the neural nets they use for decisions don't work more or less like neurons do, despite deep image nets predisposed to edge detection and such working pretty much the way we know vision does. Must be some magic substance between the ears nobody's seen yet.

Just because a neural net appears to do something like a brain doesn't mean it's the same thing or even close to the same thing. The magical substance between our ears is so poorly understood that I can't take your question seriously. Structural and compositional understanding of a synapse isn't even close to complete. For one out of a huge number of examples, find a definitive article about how memory works and how we can replicate it. I'd be happy to be wrong about this because it would be a huge step forward, but, last time I checked, the best anyone had accomplished was a partial understanding of relatively small pieces of memory storage and recall. Decision making is even harder and even less understood. Trying to emulate something you don't understand is basically a divide by zero error.

One of my closest friends has a PhD in computer vision and he works specifically on the type of systems you described. His exact comment on this topic is "nothing we do is even close to modeling a real brain." He's developing LIDAR scanner processing algorithms for autonomous cars.

Here's a post on Quora from Yohan John, a PhD from BU, who definitely disagrees with your perspective. I heard him give a similar presentation a few years back with far more detail, but the bulk of the material is summarized in that post. "Our best models might be as different from real brains as cars are from horses!"

The fact that you're so confident is more telling than anything else. No one is that sure of anything in this field.
 

agent00f

Lifer
Jun 9, 2016
12,203
1,243
86
Just because a neural net appears to do something like a brain doesn't mean it's the same thing or even close to the same thing. The magical substance between our ears is so poorly understood that I can't take your question seriously. Structural and compositional understanding of a synapse isn't even close to complete. For one out of a huge number of examples, find a definitive article about how memory works and how we can replicate it. I'd be happy to be wrong about this because it would be a huge step forward, but, last time I checked, the best anyone had accomplished was a partial understanding of relatively small pieces of memory storage and recall. Decision making is even harder and even less understood. Trying to emulate something you don't understand is basically a divide by zero error.

One of my closest friends has a PhD in computer vision and he works specifically on the type of systems you described. His exact comment on this topic is "nothing we do is even close to modeling a real brain." He's developing LIDAR scanner processing algorithms for autonomous cars.

Here's a post on Quora from Yohan John, a PhD from BU, who definitely disagrees with your perspective. I heard him give a similar presentation a few years back with far more detail, but the bulk of the material is summarized in that post. "Our best models might be as different from real brains as cars are from horses!"

The fact that you're so confident is more telling than anything else. No one is that sure of anything in this field.

I think you understand it's a rather different question to ask whether neural algs exactly replicate neurons or whether they perform the same functions more or less similarly. For example, it's a simple fact that quantities can be stored & used in both, even if how those quantities are precisely encoded in the human mind isn't clear. ML image classification literally makes decisions based on sub-attributes of layered input data, much like the visual system gradually collates inputs, the same way these Go engines classify relations/patterns just as humans recognize/abstract them.

This isn't just flying by spinning rotors, but flapping wings activated by cords. It might not precisely emulate bird skeletal, tendon and feather systems, but it's the same principles. I guess to some people nothing short of growing a bird from DNA can be classified as "flying". Your contention is basically that the human mind has some magical component, and that without this magic "thought" cannot be possible. Yet these systems are already making decisions just as well as or better than humans can by literally "learning" in much the same way. Pretty soon they'll be replacing simple desk jobs, and you'll continue to insist they don't do Real thinking, as if the people laid off aren't real thinkers.
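Since you brought up edge detection, here's that part of the analogy in a dozen lines: a hand-coded Sobel-style filter sliding over a tiny synthetic image. The early layers of a trained convolutional net end up learning filters that behave much like this one; this snippet is just the hand-rolled illustration, not anyone's actual model.

```python
import numpy as np

# 8x8 synthetic "image": dark left half, bright right half -> one vertical edge
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A Sobel-style vertical-edge filter. Deep nets *learn* kernels like this;
# here it's simply hard-coded for the sake of the example.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d(img, kernel):
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

print(conv2d(image, sobel_x))  # non-zero responses only in the columns straddling the edge
```

Stack layers of learned filters like that, each feeding the next, and you get the sub-attributes-of-sub-attributes hierarchy I'm describing.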
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
I think you understand it's a rather different question to ask whether neural algs exactly replicate neurons or whether they perform the same functions more or less similarly. For example, it's a simple fact that quantities can be stored & used in both, even if how those quantities are precisely encoded in the human mind isn't clear. ML image classification literally makes decisions based on sub-attributes of layered input data, much like the visual system gradually collates inputs, the same way these Go engines classify relations/patterns just as humans recognize/abstract them. This isn't just flying by spinning rotors, but flapping wings activated by cords. It might not precisely emulate bird skeletal, tendon and feather systems, but it's the same principles. I guess to some people nothing short of growing a bird from DNA can be classified as "flying". Your contention is basically that the human mind has some magical component, and that without this magic "thought" cannot be possible. Yet these systems are already making decisions just as well as or better than humans can by literally "learning" in much the same way. Pretty soon they'll be replacing simple desk jobs, and you'll continue to insist they don't do Real thinking, as if the people laid off aren't real thinkers.

Until the brain is understood, your comments are based on basically nothing. The best we can do is approximate behaviors, which isn't even close to the same.

It's ironic that the person who started with the "you don't know anything" stuff seems to know very little. Machines aren't thinking according to most people in this field, and I don't need to know what the threshold is to make that statement. Replacing a simple desk job is proof of exactly nothing because that's already happening. I guess some people think hitting a button really fast means a machine is thinking like a person who hits the button slower. The complexity of a job says literally nothing of value about the potential of a brain.
 

agent00f

Lifer
Jun 9, 2016
12,203
1,243
86
Until the brain is understood, your comments are based on basically nothing. The best we can do is approximate behaviors, which isn't even close to the same.

It's ironic that the person who started with the "you don't know anything" stuff seems to know very little. Machines aren't thinking according to most people in this field, and I don't need to know what the threshold is to make that statement.

It's just an unimpeachable fact that current classifiers can already outperform humans, and outperforming animals with smaller brains on many tasks is almost a given. So unless you're a religious man convinced of human exceptionalism, there's zero justification for this pride, which is based literally on our current ignorance of how our neurons are precisely tuned.

Replacing a simple desk job is proof of exactly nothing because that's already happening. I guess some people think hitting a button really fast means a machine is thinking like a person who hits the button slower. The complexity of a job says literally nothing of value about the potential of a brain.

Your only saving grace here is claiming that the algs don't think *exactly* like humans, regardless of performance on a multitude of tasks thrown at it, which only has worth if humans are linguistically defined to hold unique domain over "thought". I can see some alg soon which can make shitty arguments better than you, and you insisting that it's not "thinking" due to somewhat dissimilar neural makeup; but then what does that say about what your brain does?
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
It's just an unimpeachable fact that current classifiers can already outperform humans, and outperforming animals with smaller brains on many tasks is almost a given. So unless you're a religious man convinced of human exceptionalism, there's zero justification for this pride, which is based literally on our current ignorance of how our neurons are precisely tuned.

Your only saving grace here is claiming that the algs don't think *exactly* like humans, regardless of performance on a multitude of tasks thrown at it, which only has worth if humans are linguistically defined to hold unique domain over "thought". I can see some alg soon which can make shitty arguments better than you, and you insisting that it's not "thinking" due to somewhat dissimilar neural makeup; but then what does that say about what your brain does?

Good luck with whatever it is you do. You clearly need it.
 

agent00f

Lifer
Jun 9, 2016
12,203
1,243
86
Good luck with whatever it is you do. You clearly need it.

Evidently that day has already come:

I can see some alg soon which can make shitty arguments better than you, and you insisting that it's not "thinking" due to somewhat dissimilar neural makeup; but then what does that say about what your brain does?
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
The 'advances' in this technology are in teaching it to learn by itself - weighting certain factors and letting it play itself to generate strategy, rather than having millions of human records to teach it. That's important because self-taught systems are exactly what we need for many applications, from industrial manufacturing to autonomous driving. Right now we have to teach the robots how to move. If they can do it themselves, it removes an order of complexity from automation.

This is exactly what they did with AlphaGo Zero and the more generalized AlphaZero, less than a year after the original post here.

They fed in the rules of the games, and the AI learned to play by playing games against itself. No more human training.
https://www.theverge.com/2017/12/6/16741106/deepmind-ai-chess-alphazero-shogi-go

One of the key advances here is that the new AI program, named AlphaZero, wasn’t specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics. It simply got better by playing itself over and over again at an accelerated pace — a method of training AI known as “reinforcement learning.”
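To be clear about scale before anyone objects: what follows is only a toy of the "given the rules, improve by playing yourself" loop. It's tabular Q-learning on a one-pile take-1-to-3 Nim game I picked for brevity, so the agent gets nothing but the rules and a win/loss signal. AlphaZero's actual method (deep networks guiding Monte Carlo tree search) is vastly more sophisticated; this only shows the bare shape of self-play learning.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)                 # the "rules": remove 1-3 stones, last stone wins
Q = defaultdict(float)              # Q[(stones_before_move, action)] -> learned value

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones, epsilon):
    if random.random() < epsilon:                                # explore
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])      # exploit

def self_play_episode(max_start=12, epsilon=0.2, alpha=0.1):
    stones = random.randint(1, max_start)
    prev = None                                  # (state, action) of the last mover
    while stones > 0:
        state = stones
        action = choose(state, epsilon)
        stones = state - action
        if stones == 0:
            # this move took the last stone: reward it (+1) and mark the
            # opponent's previous move as a loser (-1)
            Q[(state, action)] += alpha * (1.0 - Q[(state, action)])
            if prev is not None:
                ps, pa = prev
                Q[(ps, pa)] += alpha * (-1.0 - Q[(ps, pa)])
        else:
            # non-terminal: a move is worth minus the best reply it leaves behind
            best_reply = max(Q[(stones, a)] for a in legal(stones))
            Q[(state, action)] += alpha * (-best_reply - Q[(state, action)])
        prev = (state, action)

for _ in range(20000):
    self_play_episode()

# After training, the greedy move from 10 stones should usually be 2,
# leaving a multiple of 4 (the losing side in this little game).
print(max(legal(10), key=lambda a: Q[(10, a)]))
```

Obviously Go doesn't fit in a lookup table, which is exactly why AlphaZero needs deep networks to generalize, but the self-play feedback loop has the same shape: play yourself, score the outcome, nudge the evaluations, repeat.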

Even more intriguing is HOW it plays chess:
https://www.technologyreview.com/s/...ss-shows-the-power-and-the-peculiarity-of-ai/

What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory. Such sacrifices of high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a very bizarre trick with a surprising positional value. “It’s like chess from another dimension,” Hassabis said.

Unlike past chess programs that were trained with all the opening moves and fed human games, this one is completely freed of human constraints and biases.

It learned how to play by itself, and its style is unique.

This could be another huge value of AI. Breaking out of our intellectual blind spots.

AlphaZero is a massive advance IMO.
 
Last edited:

BarkingGhostar

Diamond Member
Nov 20, 2009
8,410
1,617
136
This could be another huge value of AI.
Right, because there are times when I want an AI to be uniquely aware that it is different than me in order to generate a value upon it and me and thence ...
Joking aside, how does one build a trust system into whatever AI is working on for someone/something other than itself? If it tells me to take the red pill, do I take the blue?