Hmm, maybe I wasn't clear enough about the differences in gamestate. Normally, a tree search has a single unique gamestate per node. At depth 1, if you select "angle 0, thrust 200" and the enemy pod does as well, you will get a certain gamestate as a result. But the next time you select "angle 0, thrust 200" at depth 1, the enemy might choose an 18-degree turn instead. The difference in gamestate will not be very big, because it is only one turn. If you compare the possible gamestates for a depth-8 node, the differences will be much larger, because the enemy may have picked a completely different path, putting him on the other side of the map.
You can indeed use heuristics, and I do. I use heuristics to decide which node children (corresponding to moves) I am going to create. This only depends on my own actions. For example, if I shield on turn 1, I will not create children with thrust from that node, or from any of the next 3 depth levels. But you cannot create children depending on enemy actions, since you have no idea what the enemy is going to do on that turn. The first time a set of children is created, you have no statistics, and even if you do, they will not be reliable at all. You'd be pruning possible moves without basis. This is the limitation I was referring to.
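The child-creation heuristic described above can be sketched roughly like this. All names here (Node, make_children, the 3-turn shield cooldown constant) are my own illustrative assumptions, not the author's actual code; the point is only that expansion depends solely on the pod's own action history up the tree, never on enemy actions.

```python
SHIELD, THRUST, TURN_LEFT, TURN_RIGHT = "SHIELD", "THRUST", "LEFT", "RIGHT"
SHIELD_COOLDOWN = 3  # assumed: thrust unavailable for 3 levels after a shield

class Node:
    def __init__(self, action=None, parent=None):
        self.action = action
        self.parent = parent
        self.children = []

    def turns_since_shield(self):
        """Walk up our OWN path counting turns since the last SHIELD."""
        node, depth = self, 0
        while node is not None:
            if node.action == SHIELD:
                return depth
            node = node.parent
            depth += 1
        return None  # never shielded on this path

    def make_children(self):
        actions = [TURN_LEFT, TURN_RIGHT, SHIELD]
        since = self.turns_since_shield()
        # Prune thrust children while the shield cooldown is still active.
        # Note: only our own past actions are consulted here; enemy moves
        # are unknown at expansion time, so they cannot be used to prune.
        if since is None or since >= SHIELD_COOLDOWN:
            actions.append(THRUST)
        self.children = [Node(a, parent=self) for a in actions]
        return self.children
```

A node expanded right after a SHIELD simply never gets a THRUST child, so those branches are never searched at all.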
However, you're completely correct that you could react to the enemy, as long as you have a complete set of possible reactions. You must also have an option for "no enemy near" (like your WAIT option), because in some iterations there may not be any. I am not certain how much this added flexibility will affect convergence, but it may be fine. It will be fun to see how things work out, for example in Mean Max, where this sort of heuristic may be needed. I think you also have to consider whether these kinds of heuristics are features of the simulation or features of the search. If you always shoot the nearest enemy, then it is a feature of the sim. If, on the other hand, you have "passive" nodes and "aggressive" nodes, then it is indeed a search heuristic. I had not thought of this type of node. It's nice to see this search algorithm may be more widely applicable than I thought.
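The "complete set of reactions" idea could look something like the sketch below. The threshold REACTION_RADIUS and the function name are assumptions for illustration only: an "aggressive" node resolves against whatever enemy happens to be near when it is simulated, and returns nothing when no enemy is in range, which is exactly the "no enemy near" case that needs a passive fallback.

```python
import math

REACTION_RADIUS = 2000.0  # assumed "enemy is near" threshold

def resolve_reaction(my_pos, enemies):
    """Resolve an 'aggressive' reaction node against the current gamestate.
    Returns the nearest enemy position within range, or None, meaning
    'no enemy near' -> the search must fall back to a passive move."""
    best, best_d = None, REACTION_RADIUS
    for e in enemies:
        d = math.dist(my_pos, e)
        if d < best_d:
            best, best_d = e, d
    return best
```

Because the same node can resolve to different concrete targets on different iterations, its statistics average over those situations, which is where the convergence question above comes from.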
I understand now. You are talking about heuristics for child selection, while I was referring to heuristic actions.
And this can be problematic, yes. You can't use deep learning or neural networks to select a more promising child, because you don't really have a gamestate stored anywhere, nor the moves of the other pods.
Hello, may I ask what you mean by nodes in your case? How do you select node parameters? How many nodes do you typically have? Can you tell us more about the node.MakeChildren() function?
You contradict yourself a bit in the last paragraph: "It is hard to use heuristics [...] because [...] things will be different on the next iteration". Hmm, that's just like any normal action, right? You stated above: "However, the next time the same node is selected by a pod, the other pods may select different nodes, leading to a different gamestate. The differences in possible gamestates will be larger for greater depth levels." It's contradictory.
I can't see why a heuristic can't be modelled as just another action, with the heuristic logic living in the pod.ApplyMove() you have.
You can have a high-level action like "shoot at the nearest enemy position + its velocity" or "CP -3 velocity", and it will behave differently according to the gamestate. It will be scored differently, and in the end it will turn out to be a good action or not, just like a simple "LEFT" action.
But both kinds of actions, the heuristic one and the simple one, can have different outcomes depending on the gamestate they are played in. So I can't see the limitation there.
Obviously you'll have gamestates where you end up with an "invalid" heuristic move, such as when you have no blocker near, but then it can fall back to a simple default action like "WAIT". The very nature of MCTS should reduce the occurrence of this action if it's bad.
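To make the argument above concrete, here is a minimal sketch of a heuristic action resolved inside the apply step, with the WAIT fallback. Everything here (apply_move, the action name "SHOOT_NEAREST_LEAD", the enemy representation) is an illustrative assumption, not anyone's actual bot: the heuristic action is just another entry in the move set, and it evaluates against the current gamestate when played.

```python
import math

WAIT = "WAIT"

def apply_move(my_pos, action, enemies):
    """enemies: list of (position, velocity) tuples.
    Returns the concrete target chosen this turn, or WAIT when the
    heuristic has no valid target in this particular gamestate."""
    if action == "SHOOT_NEAREST_LEAD":
        if not enemies:
            return WAIT  # "invalid" heuristic move -> default fallback
        pos, vel = min(enemies, key=lambda e: math.dist(my_pos, e[0]))
        # Lead the target: aim at position + velocity, as described above.
        return (pos[0] + vel[0], pos[1] + vel[1])
    return action  # simple actions like "LEFT" pass through unchanged
```

If the fallback branch is usually bad, its node simply accumulates poor statistics and the search stops selecting it, which is the self-correcting behaviour the comment relies on.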