
The University of Alberta’s Computer Poker Research Group created DeepStack, an artificial intelligence program that defeated professional human poker players at heads-up, no-limit Texas hold ‘em.
Beyond being the first win of its kind, the result bears significance for applications ranging from making better medical treatment recommendations to developing improved strategic defence planning, according to the study, "DeepStack: Expert-level artificial intelligence in heads-up no-limit poker," published in Science.
DeepStack brings together approaches developed for games of perfect information, where both players can see everything on the board, and games of imperfect information, where they cannot: it reasons while playing, drawing on an intuition built through learning to reassess its strategy with every decision.
Computing scientist Michael Bowling, professor in the University of Alberta’s Faculty of Science and principal investigator on the study, said poker has presented an ongoing challenge to artificial intelligence.
“It is the quintessential game of imperfect information in the sense that the players don’t have the same information or share the same perspective while they’re playing,” explained Bowling.
Parlour games have proved useful to researchers in this field over the years because they can be described mathematically and provide insight into the interactions between decision-makers.
In a similar milestone, on May 11, 1997, Deep Blue, an IBM computer, defeated world chess champion Garry Kasparov over six games: the computer won two, the champion won one, and three were drawn.
According to IBM’s website, the competition lasted several days and drew considerable media attention, and, just as with DeepStack, it “was important computer science, pushing forward the ability of computers to handle the kinds of complex calculations needed to help discover new medical drugs; do the broad financial modeling needed to identify trends and do risk analysis; handle large database searches; and perform massive calculations needed in many fields of science.”
Bowling added that the team trains its system to learn the value of poker situations, using a technique called continual re-solving.
“Each situation itself is a mini poker game,” he said. “Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game.”
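To make that idea concrete, here is a minimal sketch of the continual re-solving structure Bowling describes: before every decision, solve only a small local game, and let a learned value estimate stand in for everything beyond the lookahead horizon. The toy game, the two-action set, the depth-limited search, and the placeholder value function below are all assumptions made for illustration; DeepStack itself solves its subgames with counterfactual regret minimization and estimates values with deep neural networks trained on millions of solved poker situations.

```python
# A minimal, illustrative sketch of "continual re-solving": at every decision
# point, solve only a small local game, using a value estimate in place of
# everything beyond the lookahead horizon, then act and repeat.
#
# This is NOT DeepStack's actual algorithm: the real system runs
# counterfactual regret minimization over poker subgames and uses deep
# counterfactual value networks. Here a toy two-action game, a depth-limited
# search, and a placeholder value function stand in for those pieces.

import random

ACTIONS = ["fold", "bet"]        # toy action set, purely illustrative
LOOKAHEAD_DEPTH = 3              # how far each "mini game" looks ahead


def value_estimate(state):
    """Placeholder for the learned 'intuition' that scores unseen situations.

    DeepStack trains deep networks for this; here we just map the state to a
    pseudo-random score in [-1, 1] that is stable within a run.
    """
    rng = random.Random(hash(state))
    return rng.uniform(-1.0, 1.0)


def next_state(state, action):
    """Toy transition: append the chosen action to the game history."""
    return state + (action,)


def solve_local_game(state, depth):
    """Depth-limited search over the local 'mini game'.

    Beyond the horizon the value function supplies the score, which is the
    core idea behind re-solving: never reason about the whole game at once.
    """
    if depth == 0:
        return value_estimate(state), None
    best_value, best_action = float("-inf"), None
    for action in ACTIONS:
        child_value, _ = solve_local_game(next_state(state, action), depth - 1)
        if child_value > best_value:
            best_value, best_action = child_value, action
    return best_value, best_action


def play_hand(num_decisions=4):
    """Re-solve a fresh local game before every single decision."""
    state = ()
    for _ in range(num_decisions):
        _, action = solve_local_game(state, LOOKAHEAD_DEPTH)
        print(f"history={state} -> chose {action}")
        state = next_state(state, action)
    return state


if __name__ == "__main__":
    play_hand()
```

The point of the structure, in the spirit of Bowling's description, is that the program never holds the full game in memory: each decision spawns its own small solve, and the quality of play rests on how good the value estimates at the frontier are.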
Regardless of the difficulty of the situation, DeepStack acts with an average of three seconds of “thinking” time, and it runs on a gaming laptop.
The AI program was pitted against “a pool of professional poker players recruited by the International Federation of Poker. Thirty-three players from 17 countries were recruited. Each was asked to complete a 3,000 game match over a period of four weeks between Nov. 7 and December 12, 2016,” the study noted.
The study involved 44,000 hands of poker, and DeepStack triumphed over all 11 players who finished the match, with only one result falling outside the margin of statistical significance.
There is no doubt artificial intelligence research benefits many areas of our world, especially when scientists go “all-in” to delve deeper into what else there is to learn from their sometimes victorious machine counterparts.