[English]

In the history of infectious-disease treatment, pathogens have repeatedly evolved resistance to single-drug therapies through mutation. In contrast, when multiple drugs are used simultaneously, each helps prevent the emergence of resistance to the others. The name Ryfamate derives from Rifamate, a combination of two tuberculosis medicines designed to retard the development of drug resistance. In the same spirit, Ryfamate combines an NNUE-based alpha-beta search with a deep-learning (DL) Monte Carlo tree search so that each compensates for the other's shortcomings.

Ryfamate employs a modified majority voting system integrating a deep-learning-based evaluation engine and an NNUE-based evaluation engine [1]. By combining a deep-learning engine optimized primarily for GPU computation and an NNUE engine designed for CPU computation, Ryfamate effectively leverages the complementary strengths of both approaches. This hybrid configuration enables efficient utilization of computing resources within a single PC, significantly enhancing overall performance.
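
The exact rules of the modified majority voting are described in [1]; the Python fragment below is only a minimal sketch of the general idea, and the function name vote, the rank-based weighting, and the weight values 1.2 / 1.0 are illustrative assumptions rather than Ryfamate's actual scheme.

  # Minimal sketch of blending move choices from a DL engine and an NNUE
  # engine. The rank-decay weights and the tie-break rule are assumptions
  # made for illustration only; the actual rules are documented in [1].
  from collections import defaultdict

  def vote(dl_candidates, nnue_candidates, dl_weight=1.2, nnue_weight=1.0):
      """Each argument is a list of (move, score) pairs, best move first."""
      tally = defaultdict(float)
      for rank, (move, _score) in enumerate(dl_candidates):
          tally[move] += dl_weight / (rank + 1)
      for rank, (move, _score) in enumerate(nnue_candidates):
          tally[move] += nnue_weight / (rank + 1)
      # Highest combined weight wins; ties go to the DL engine's top choice.
      return max(tally, key=lambda m: (tally[m], m == dl_candidates[0][0]))

  print(vote([("7g7f", 120), ("2g2f", 90)], [("2g2f", 80), ("7g7f", 75)]))

In the actual engine the candidate lists would come from the two engines' searches; how scores, node counts, and engine priority enter the decision follows [1].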

The author also proposes the Ryfamate Cross Network (RyfcNet) [2], an advanced deep-learning architecture specifically designed to enhance Shogi evaluation functions by effectively integrating the following distinctive components (sketched in code after the list):

  1. S-Layer: Conventional convolutions employing standard square-shaped kernels.
  2. C-Layer: Extended convolutions utilizing kernels that span the full length of the input space across two or more dimensions, enabling selective dimensional shifts.
  3. A-Layer / F-Layer: Spatial self-attention layer / channel-wise fully connected layers.
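
RyfcNet's actual layer definitions are given in [2]; the PyTorch fragment below is only one possible reading of the layer families listed above, assuming 9x9 board features of shape (batch, channels, 9, 9), with the channel width, head count, and the full-file / full-rank / full-board kernel choices all being assumptions made for illustration.

  # Hypothetical sketches of the S-, C-, A-, and F-Layers described above,
  # assuming 9x9 shogi board features of shape (batch, channels, 9, 9).
  import torch
  import torch.nn as nn

  C = 64  # assumed channel width

  # S-Layer: conventional convolution with a square kernel.
  s_layer = nn.Conv2d(C, C, kernel_size=3, padding=1)

  # C-Layer: kernels spanning the full length of the board, here read as
  # covering a whole file, a whole rank, or the entire board.
  c_layer_file = nn.Conv2d(C, C, kernel_size=(9, 1), padding=(4, 0))
  c_layer_rank = nn.Conv2d(C, C, kernel_size=(1, 9), padding=(0, 4))
  c_layer_board = nn.Conv2d(C, C, kernel_size=9, padding=4)

  # A-Layer: spatial self-attention over the 81 squares.
  class ALayer(nn.Module):
      def __init__(self, channels, heads=4):
          super().__init__()
          self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

      def forward(self, x):                   # x: (B, C, 9, 9)
          b, c, h, w = x.shape
          seq = x.flatten(2).transpose(1, 2)  # (B, 81, C)
          out, _ = self.attn(seq, seq, seq)
          return out.transpose(1, 2).reshape(b, c, h, w)

  # F-Layer: channel-wise fully connected layer, i.e. a 1x1 convolution
  # applied independently at every square.
  f_layer = nn.Conv2d(C, C, kernel_size=1)

  x = torch.randn(1, C, 9, 9)
  y = f_layer(ALayer(C)(c_layer_file(s_layer(x))))
  print(y.shape)  # torch.Size([1, 64, 9, 9])

A full-file or full-rank kernel lets a single layer propagate rook- and lance-like influence across the whole board, which is one plausible motivation for mixing the C-Layer with ordinary square kernels.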

Standard deep-learning-based Shogi evaluation models [3] typically become costly and difficult to train as network depth increases. To address these limitations, RyfcNet introduces several strategic innovations:

  1. Integration of multiple activation functions coupled with modified skip-connection architectures to alleviate gradient degradation and enhance feature propagation.
  2. Implementation of ensemble learning and knowledge distillation techniques, leveraging multiple types of evaluation functions as teacher models to enrich learned representations.
  3. Adoption of a hybrid optimization approach that selectively applies RAdam or LAMB optimizers alongside Stochastic Gradient Descent (SGD) at the layer level, significantly improving training efficiency and stability (points 2 and 3 are sketched in code after this list).
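
As a rough illustration of points 2 and 3, the PyTorch fragment below blends scores from two assumed teacher evaluation functions into a distillation target and splits the parameters between SGD (convolutions) and RAdam (dense layers); the real teacher set, blending weights, per-layer assignment, and the LAMB variant follow [2] and are not reproduced here.

  import torch
  import torch.nn as nn

  student = nn.Sequential(
      nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
      nn.Flatten(), nn.Linear(64 * 9 * 9, 1), nn.Tanh(),
  )

  # Layer-level optimizer assignment (point 3): convolutions stay on SGD,
  # the dense head is handled by RAdam (LAMB would be swapped in similarly).
  sgd_params = [p for m in student.modules() if isinstance(m, nn.Conv2d)
                for p in m.parameters()]
  radam_params = [p for m in student.modules() if isinstance(m, nn.Linear)
                  for p in m.parameters()]
  optimizers = [torch.optim.SGD(sgd_params, lr=0.01, momentum=0.9),
                torch.optim.RAdam(radam_params, lr=1e-3)]

  # Knowledge distillation from an ensemble of teachers (point 2): two
  # assumed evaluation functions provide soft value targets for the student.
  x = torch.randn(8, 64, 9, 9)                # dummy position features
  teacher_a = torch.rand(8, 1) * 2 - 1        # e.g. a DL value head
  teacher_b = torch.rand(8, 1) * 2 - 1        # e.g. an NNUE evaluation
  target = 0.5 * teacher_a + 0.5 * teacher_b  # assumed blending weights

  loss = nn.functional.mse_loss(student(x), target)
  loss.backward()
  for opt in optimizers:
      opt.step()
      opt.zero_grad()

Keeping each optimizer's state restricted to its own parameter group is what makes the layer-level split cheap in this sketch: the convolutional trunk trains at plain-SGD cost, while only the layers given adaptive learning rates pay for extra optimizer state.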

In recent years, large-scale neural models, exemplified by large language models (LLMs), have attracted considerable attention for their representational power. Analogously, the innovations in RyfcNet are intended to let Shogi evaluation models scale efficiently to larger parameter counts, thereby improving predictive accuracy across diverse gameplay scenarios.

[Library]

YaneuraOu : https://github.com/yaneurao/YaneuraOu

dlshogi : https://github.com/TadaoYamaoka/DeepLearningShogi

* Ryfamate also uses a large amount of data published by Kano-san, Yamaoka-san, Tayayan-san, and nodchip-san.

[Reference]

[1] 水無瀬香澄, "Ryfamate Development Notes and Modified Majority-Vote Consultation" (Ryfamate 開発記と変則多数決合議), Journal of the Computer Shogi Association (コンピュータ将棋協会誌), Vol. 36, 2025.

[2] Komafont, "Ryfamate Cross Network," The 33rd World Computer Shogi Championship, 2023.

[3] Tadao Yamaoka and Kunihiko Kano, "How to Build a Strong Shogi Program: A Deep-Learning Shogi AI Implemented in Python" (強い将棋ソフトの創りかた Pythonで実装するディープラーニング将棋AI), Mynavi Publishing, 2021.