Search-based algorithms continue to dominate traditional board games. Stockfish, AlphaGo, and friends are hybrid systems that combine a game-tree search algorithm with a neural net. The neural net is trained by self-play to learn an evaluation function for the search (more precisely, it learns a classifier of board positions as leading to a win, loss, or draw). The search algorithm is alpha-beta minimax in Stockfish and Monte Carlo Tree Search in AlphaGo and its successors. As far as I know, anyway.
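To make the division of labour concrete, here is a minimal sketch of alpha-beta minimax with a pluggable evaluation function, which is roughly the slot a neural net fills in a Stockfish-style engine. The toy "game" is a hand-built tree (internal nodes are lists of child positions, leaves are ints), and the `evaluate` function is a hypothetical stand-in for a learned evaluator, not any real engine's code.

```python
def evaluate(leaf):
    # Stand-in for a learned evaluation function: in a real engine this
    # would be a neural net scoring the board position.
    return leaf

def alphabeta(node, depth, alpha, beta, maximizing):
    # Leaves (non-lists) and depth-limited nodes get scored by the evaluator.
    if depth == 0 or not isinstance(node, list):
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: the opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: we already have a better option
                break
        return value

# A small hand-built tree; its minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree, 3, float("-inf"), float("inf"), True))  # prints 6
```

The pruning (the two cutoff branches) is what lets the search go deep; the quality of `evaluate` at the frontier is what the self-play training buys you.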
DeepMind have downplayed the use of MCTS in their Alpha-x family, to the point of obscuring the fact that it is part of the system at all, and have sown much confusion about this, but their systems aren't going anywhere without good, old-fashioned game-tree search.
Stockfish only adopted neural nets fairly recently, btw (the NNUE evaluation landed in 2020).