## Some MisereSolver questions answered

Mike Weimerskirch asked some questions about MisereSolver. They’re answered inline by Aaron Siegel:

* * *

In what sense do the candidate quotients ‘converge’ to the correct quotient? I’ve read that “MisereSolver sometimes chooses very poor candidates; often they are considerably larger than the true quotient.” Do you have a quick example of when this happens?

Q_30(0.07) has size 360, but one of the candidates has size 848:

```
java -jar misere.jar 0.07 -maxheap 30
```

The candidates converge in the sense that their lexicographically least failures are strictly increasing in lexicographic order, so each successive candidate is correct on a strictly larger initial segment of the lex order.
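The shape of that invariant can be sketched abstractly. Below is a hedged toy, not MisereSolver's actual code: a candidate is repaired at its lex-least failure, and because each repair pushes the first failure strictly later in the lex order, the verified prefix grows monotonically until no failure remains. The universe, `target`, `lex_least_failure`, and `repair` here are all illustrative assumptions.

```python
from itertools import product

# Toy universe: binary words of length <= 3, in lexicographic order.
universe = sorted(''.join(w) for n in range(1, 4)
                  for w in product('01', repeat=n))

# Hypothetical "true" classification the iteration is trying to match.
target = {w: w.count('1') % 2 == 0 for w in universe}

def lex_least_failure(candidate):
    """First word (in lex order) where the candidate disagrees with the truth."""
    for w in universe:
        if candidate.get(w, False) != target[w]:
            return w
    return None  # no failures: candidate is correct everywhere

def repair(candidate, w):
    """Fix the candidate at its failure point."""
    fixed = dict(candidate)
    fixed[w] = target[w]
    return fixed

candidate = {}
failures = []
while (w := lex_least_failure(candidate)) is not None:
    failures.append(w)
    candidate = repair(candidate, w)

# The successive lex-least failures are strictly increasing in lex order.
assert failures == sorted(failures)
```

The point of the toy is only the monotonicity: even if an intermediate candidate is wildly wrong (or, in MisereSolver's case, much too large), the portion of the lex order it gets right never shrinks.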

On the other hand, MisereSolver doesn't share a problem that my algorithm has: producing a candidate quotient that is correct in terms of its pretending function and outcome partition, but unreduced, in that it assigns multiple monoid elements to the same indistinguishability class. So MisereSolver never gives candidates that are too large in that sense.

Note that it’s extremely easy to pass from an unreduced bipartite monoid to a reduced one: just quotient out by the indistinguishability relation

x ρ y iff P(xz) = P(yz) for all z ∈ Q.

So if you have an unreduced bipartite monoid that is correct for a given game, you can find the misere quotient by modding out by this relation. Indeed, an early version of MisereSolver worked by finding a large unreduced quotient and then passing from there to the reduced quotient. The problem was that the unreduced quotients rapidly became too large to be useful. That is why I switched to the current iterative algorithm; essentially, it performs the reduction repeatedly, at each stage of the iteration.
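The reduction step itself is mechanical. Here is a hedged sketch in Python, run on a hypothetical toy monoid (saturating addition on {0, 1, 2, 3}) rather than a real misere quotient: elements are grouped by how they act with respect to the P-portion, which is exactly quotienting by the relation above.

```python
def reduce_bipartite_monoid(elements, mul, p_set):
    """Group elements by their indistinguishability signature:
    x ~ y  iff  (x*z in P) == (y*z in P)  for every z in the monoid."""
    classes = {}
    for x in elements:
        sig = tuple(mul(x, z) in p_set for z in elements)
        classes.setdefault(sig, []).append(x)
    return list(classes.values())

# Toy monoid: {0,1,2,3} under saturating addition, with P = {2, 3}.
elements = [0, 1, 2, 3]
mul = lambda x, y: min(x + y, 3)
classes = reduce_bipartite_monoid(elements, mul, {2, 3})
# 2 and 3 act identically against P, so they collapse into one class,
# leaving a reduced monoid with 3 elements.
```

As the text says, the computation is cheap; the difficulty was only that the unreduced monoids fed into it grew too large.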

Also, how does MisereSolver know that the lexicographically least failure occurs for game G when the number of games preceding G in the lex order is infinite?

It uses the verification algorithms described in our paper (“Misere quotients for impartial games,” supplementary material, http://www.arxiv.org/abs/0705.2404). The basic idea is to prove that the lex-least failure must satisfy certain constraints, and the constraints are strong enough that only finitely many words in the heap alphabet satisfy them. (That is, we can prove that if a failure occurs for a word that is not among the finitely many critical cases, then it must also occur for one of the critical words.) Therefore it suffices to verify just the finitely many critical cases, in ascending order.
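The same shape of argument appears in a much simpler setting: two eventually periodic sequences agree at every index iff they agree on a finite critical set. The sketch below is an illustration of that pattern only; the bound and the sequences are illustrative assumptions, not taken from the paper.

```python
from math import lcm

def sequences_agree(f, g, pre1, per1, pre2, per2):
    """Check f(n) == g(n) for ALL n >= 0, given that f(n + per1) == f(n)
    for n >= pre1, and likewise for g.  Any failure at n >= bound would
    repeat at some n' < bound, so checking the finite critical set
    [0, bound) suffices -- an infinite claim reduced to finitely many cases."""
    bound = max(pre1, pre2) + lcm(per1, per2)
    return all(f(n) == g(n) for n in range(bound))

# f and g coincide everywhere; h differs from index 2 on.
f = lambda n: 5 if n < 2 else n % 3
g = lambda n: 5 if n < 2 else (n + 6) % 3
h = lambda n: 5 if n < 2 else (n + 1) % 3

assert sequences_agree(f, g, 2, 3, 2, 3)
assert not sequences_agree(f, h, 2, 3, 2, 3)
```

MisereSolver's actual constraints are far subtler, but the logical move is the same: bound where the lex-least failure could first occur, then check only those finitely many critical words.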

Hope this helps,

Aaron