HOW DO THE ROBOTS WORK?
The Basics
Almost any game-playing Robot is actually just three relatively small modules added to the usual game machinery: a move generator, an evaluator, and a lookahead manager.
Robot's Move
There's really not much to say - it's a brute-force process which tries all six tiles in all possible positions and remembers which placements are legal moves. In a typical game situation there are about 50-75 legal moves; in the endgame there can be many more, possibly as many as 200.
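The brute-force loop above can be sketched in a few lines. Everything here - the Board class, its legality rule, and the tile encoding - is a hypothetical stand-in; the article describes only the "try everything, keep what's legal" control flow:

```python
# Hedged sketch of a brute-force move generator for a Tantrix-like game.
# Board, empty_cells, and is_legal are made-up names for illustration.
from itertools import product

class Board:
    def __init__(self, empty, legal):
        self._empty = empty          # list of open hex cells
        self._legal = legal          # predicate: (tile, rotation, cell) -> bool

    def empty_cells(self):
        return list(self._empty)

    def is_legal(self, tile, rotation, cell):
        return self._legal(tile, rotation, cell)

def legal_moves(board, rack):
    """Try every rack tile, in all six rotations, on every open cell."""
    moves = []
    for tile, cell in product(rack, board.empty_cells()):
        for rotation in range(6):    # hexagonal tiles have six orientations
            if board.is_legal(tile, rotation, cell):
                moves.append((tile, rotation, cell))
    return moves

# Toy usage: 6 tiles x 4 cells x 6 rotations = 144 candidates; the dummy
# rule keeps even rotations only, so half survive.
board = Board(empty=[(0, 0), (0, 1), (1, 0), (1, 1)],
              legal=lambda t, r, c: r % 2 == 0)
rack = list(range(6))
print(len(legal_moves(board, rack)))   # 72
```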
The basic evaluation is, for each line of the Robot's colour, to predict what it will eventually be worth, and so to predict for any position what the final score of the game will be. This trick of predicting the final score, rather than using the current score, is why the Robot is so adept at handling loops.
So, how does the evaluator estimate the final score?
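The article does not spell the estimator out, but its shape can be sketched. The loop-counts-double rule below is real Tantrix scoring; the growth bonus for open line ends is an entirely made-up parameter standing in for whatever the real evaluator uses:

```python
# Hedged sketch of a final-score evaluator: sum a predicted eventual
# worth over every line of the Robot's colour.

def predict_line_value(length, open_ends, is_loop):
    if is_loop:
        return 2 * length              # a completed loop scores double its length
    # a line with live ends is predicted to grow; 1.5 is an invented weight
    return length + 1.5 * open_ends

def predict_final_score(lines):
    """lines: iterable of (length, open_ends, is_loop) tuples."""
    return sum(predict_line_value(*line) for line in lines)

# Toy position: a 6-tile loop plus a 4-tile line with two live ends.
print(predict_final_score([(6, 0, True), (4, 2, False)]))   # 19.0
```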
Robot's Traditional Lookahead Manager
Tantrix is unusual because players can make several moves per turn. The Robot considers all forced-move sequences in every possible order, including all forced moves the opponent will have to make before his free moves. Consequently, it has a pretty good idea of how to play in ways that will force other tiles to be played favourably. It does NOT consider what the opponent's free move might be in response, and it does NOT consider what tiles might be drawn to replace the tiles it has played. Also, an absolute limit of 5 (or so) forced tiles is imposed to keep the number of moves examined down to a reasonable number. The lookahead manager does not attempt alpha-beta or any other standard tree-pruning method, because they would be of very little use given the limited depth of the lookahead.
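That forced-sequence search can be sketched as a small recursion: after a candidate move, play out every ordering of the pending forced tiles (re-enumerating them after each placement, so all orders are covered) and evaluate the quiet position reached, stopping at the tile cap. Only the control flow mirrors the article; the toy game model below is invented:

```python
# Hedged sketch of the forced-sequence lookahead with the ~5-tile cap.
FORCED_TILE_CAP = 5

def settle_forced(state, apply_move, forced_moves, evaluate, depth=0):
    """Play out forced moves in every order, return the best resulting
    evaluation; stop when no forced moves remain or the cap is reached."""
    pending = forced_moves(state)
    if not pending or depth >= FORCED_TILE_CAP:
        return evaluate(state)
    return max(settle_forced(apply_move(state, m), apply_move,
                             forced_moves, evaluate, depth + 1)
               for m in pending)

# Toy model: the state is a running score, each "forced tile" adds 1 or 2
# points, and forced moves keep appearing until the score reaches 6.
score = settle_forced(
    0,
    apply_move=lambda s, m: s + m,
    forced_moves=lambda s: [1, 2] if s < 6 else [],
    evaluate=lambda s: s,
)
print(score)   # 7: the best ordering sneaks in a 1 before the final 2
```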
The lookahead manager used by Goodbot is slightly more sophisticated. It removes the low limit of 5 forced tiles and adds alpha-beta pruning to get the speed back up (alpha-beta doesn't change the result). It also adds the option to look a little deeper during the endgame, and it manages its time to avoid runaway searches when a cascade of forced tiles occurs.
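Alpha-beta itself is standard: it skips branches that cannot affect the final choice, which is why it speeds the search up without changing the result. A generic sketch, with the tree, children, and evaluate callbacks as illustrative stand-ins rather than the real Tantrix search:

```python
# Hedged sketch of minimax search with alpha-beta pruning.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float('-inf')
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break        # opponent would never allow this branch
        return best
    best = float('inf')
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break            # we would never choose into this branch
    return best

# Toy tree: internal nodes are lists, leaves are scores.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
value = alphabeta(tree, 4, float('-inf'), float('inf'), True,
                  children=lambda n: n if isinstance(n, list) else [],
                  evaluate=lambda n: n if isinstance(n, int) else 0)
print(value)   # 6 - same value plain minimax would find, fewer nodes visited
```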
Monte Carlo Lookahead
A radically different approach to building a robot lookahead is becoming popular and is having some success in computer Go, which is much harder for robots than chess. Now it is proving quite good in Tantrix as well. Monte is a robot without a brain. It has no evaluator in the traditional sense; it relies on playing games all the way to the end and finding out the actual score. It does this very fast, and many times, to get an average prediction of which moves work best. The art in this "Monte Carlo" approach is deciding which moves to spend time investigating. Monte was programmed by Pete Bruns, based on the code from Oliver, but using his own evaluator. This paper describes his work.
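The simplest ("flat") form of the idea can be sketched as follows: for each candidate move, run many fast random playouts to the real end of the game and keep the move whose playouts score best on average. The game model is a stub supplied by the caller; Monte's actual playout policy and time budgeting are not described here:

```python
# Hedged sketch of flat Monte Carlo move selection: no evaluator, just
# averaged random playouts per candidate move.
import random

def monte_carlo_choose(moves, apply_move, random_playout, n_playouts=200,
                       rng=random):
    """Return the move whose random playouts score best on average."""
    def average_result(move):
        state = apply_move(move)
        total = sum(random_playout(state, rng) for _ in range(n_playouts))
        return total / n_playouts
    return max(moves, key=average_result)

# Toy game: each move has a hidden true value; a "playout" observes it
# with noise.  Enough playouts average the noise away.
rng = random.Random(7)
true_value = {'a': 0.2, 'b': 0.8, 'c': 0.5}
best = monte_carlo_choose(
    ['a', 'b', 'c'],
    apply_move=lambda m: m,
    random_playout=lambda s, r: true_value[s] + r.uniform(-0.3, 0.3),
    n_playouts=500,
    rng=rng,
)
print(best)   # 'b' wins on average despite the playout noise
```

The real art, as the text says, is spending playouts wisely rather than splitting them evenly across all moves.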
A word from the Author
There are quite a few tunable parameters associated with all these considerations, but actually tuning them is very difficult, because the random factor means a "run" of 200 games is required to validate a new parameter value or a change to the algorithm that applies the parameter. The current state of GoodBot is probably the end of the line from me - I don't know how to make it any smarter, and making it better without being fundamentally smarter would be extremely time consuming.
Both Robot and Goodbot were written by Dave Dyer (ddyer).
2005 Robot improvements
At the end of January 2005, the first version of GoodBot was released into the lobby. This version of GoodBot was a harder-working, gloves-off version of the original Robot, but without any fundamental changes. Advances in computer speeds since the original Robot was created made this development possible. Because of its more "human" playing ability, GoodBot had a floating ranking in the beginning.
In May 2005, a new robot named Oliver was introduced into the lobby. The first official robot battle - Goodbot vs. Oliver - was held on the 11th of that month. Oliver, programmed by Pieter Bolle (Belgium) and Alex Biryukov (Israel), performed slightly better than expected by defeating the reigning champion Goodbot with a winning percentage of 64.5% to 30.5%.
Inspired and challenged by Oliver's success, new developments in Goodbot resulted in substantial improvements. A rematch battle between Goodbot and Oliver resulted in Goodbot regaining his title of champion bot with a win of 54.7% to 45.3% (TPs).
2011 Robot improvements
In October 2011, FullMonte, the Monte Carlo robot was unleashed. Instead of trying to look ahead, FullMonte uses raw speed to finish as many games as possible from a given position and then figures out which move resulted in the most wins.
FullMonte challenged Goodbot and won easily - 63% to Goodbot's 32.2% (4.8% of the games ended in draws). The challenge encompassed 200 games and took just under 36 hours.
Along with FullMonte, another robot was released at the same time - Monte. Monte can be considered FullMonte's slightly dimmer brother; oddly enough, this is because he is quicker. The only difference between the two is that FullMonte is set to take about 30 seconds per move on average, while Monte takes only around 15 seconds. Both bots are written so that moves may take a bit longer at the start of a match (when there are more potential moves to consider) and a bit less time at the end.
In 2016 the most advanced algorithm yet was announced and added to the Robot line-up. Called Darwin and designed by Simon Buhr from Germany, it had no trouble outclassing all comers, including beating FullMonte in a head-to-head competition. Darwin won 70.8% and lost 24.8%, with 4.5% of the games drawn.
According to Simon: "I would not say that Darwin is near perfect. He is very good at the mid game, but we all see some of his failures, especially during the end game or at the beginning (remember game 4 against Niklas). So there is of course room for something better. Dave mentioned the next revolution is maybe up to neural networks and true machine learning, like the algorithm behind AlphaGo. But I agree that currently Darwin is well suited to produce some realistic game outputs."
The codebase is available as a standard "robot kit". There is still a huge amount of unexplored space. The challenge is all yours!
© Copyright 2020, Colour of Strategy Ltd. Pohara, New Zealand. All rights reserved.
Last update: November, 2020