abstract
large_string
keywords
large_string
huggingface
large_string
github
large_string
url
large_string
booktitle
large_string
year
large_string
author
large_string
title
large_string
ENTRYTYPE
large_string
ID
large_string
type
large_string
supervisor
large_string
pdf
large_string
doi
large_string
pages
large_string
number
large_string
volume
large_string
journal
large_string
month
large_string
note
large_string
editor
large_string
website
large_string
series
large_string
publisher
large_string
numpages
large_string
articleno
large_string
issue_date
large_string
address
large_string
eprint
large_string
eprinttype
large_string
issn
large_string
school
large_string
isbn
large_string
location
large_string
tldr
large_string
bot
large_string
slides
large_string
poster
large_string
model
large_string
blog
large_string
day
large_string
language
large_string
dataset
large_string
institution
large_string
primaryclass
large_string
archiveprefix
large_string
eissn
large_string
place
large_string
howpublished
large_string
video
large_string
organization
large_string
talk
large_string
keywors
large_string
article-number
large_string
urldate
large_string
data
large_string
langid
large_string
pagetotal
large_string
titleaddon
large_string
preprint
large_string
repository
large_string
software
large_string
figshare
large_string
laysummary
large_string
annote
large_string
appendix
large_string
pypi
large_string
code
large_string
study
large_string
In this study, we present an innovative fusion of language models and query analysis techniques to unlock cognition in artificial intelligence. The introduced open-source AI system seamlessly integrates a Chess engine with a language model, enabling it to predict moves and provide strategic explanations. Leveraging a vector database to achieve retrievable answer generation, our AI system elucidates its decision-making process, bridging the gap between a machine's computational cognition and human-like understanding. Our choice of Chess as the demonstration environment underscores the versatility of our approach. Beyond Chess, our system holds promise for diverse applications, from medical diagnostics to financial forecasting. Our AI system is available at https://github.com/TheOpenSI/CoSMIC
AI cognition, Chess, large language models, query analysis, retrievable answer generation
https://huggingface.co/OpenSI/cognitive_AI_chess
https://github.com/TheOpenSI/CoSMIC
https://aisel.aisnet.org/acis2024/31
Australasian Conference on Information Systems, {ACIS} 2024, Canberra, Australia, December 4-6, 2024
2024
Muntasir Adnan and Buddhi Gamage and Zhiwei Xu and Damith Chandana Herath and Carlos C. N. Kuhn
Unleashing Artificial Cognition: Integrating Multiple {AI} Systems
inproceedings
adnan:2024:unleashing-artificial-cognition-integrating-multiple-ai-systems
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess has long been a benchmark for artificial intelligence (AI) research due to its complexity and well-defined rules. Recent advances, such as AlphaZero, introduced self-learning AI through reinforcement learning and self-play, achieving superhuman performance without prior strategic knowledge, relying solely on the rules of the game. AlphaZero defeated the world-champion chess engine Stockfish after only four hours of training, leveraging large-scale computational resources to rapidly learn and refine its strategies. This thesis presents the development of a chess engine for the chess variant Atomic Chess. The engine was developed in C++ and trained through self-play and reinforcement learning, taking inspiration from AlphaZero's approach. This project explores the extent to which a chess engine with this approach is feasible for the average enthusiast. Cost-effective cloud-based virtual machine instances with powerful hardware were rented to manage training workloads. Given limited computational resources, we opted for a data-centric approach, focusing on refining the training pipeline to maximize the training data that could be produced, rather than hyperparameter tuning and experimenting with neural network architectures. The final engine was trained on approximately 450,000 self-play games in roughly 150 hours. It was then deployed on the chess platform Lichess and achieved an Elo rating of 1,729, which corresponded to the top 10th percentile of Atomic Chess players on Lichess. These results demonstrate that it is possible to achieve a competitive Atomic Chess engine within a budget of 3,000 SEK for cloud computation. This shows that strong self-play reinforcement learning agents for niche games can be developed without requiring large-scale computing infrastructure. These results highlight the viability of accessible, low-budget AI research for underexplored game variants.
null
null
null
http://hdl.handle.net/20.500.12380/310683
null
2025
Adolfsson, Hannes and Lewis, David and Rahmn, Anton and Rajam{\"a}e, Sigge and Rungardt, Edvin and Tafani, Marco
A self-trained engine for a chess variant
thesis
adolfsson:2025:self-trained-engine-chess-variant
Bachelor's thesis
Abel, Andreas
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The board game market has experienced significant growth worldwide over the past decade, and Hungary is no exception. The aim of this study is to analyse the state of the Hungarian board game market from both financial and macroeconomic perspectives, and to compare it with international trends. The analysis is based on international statistical data, as well as revenue data and website traffic statistics of the largest board game publishers and distributors in Hungary. Additionally, we have considered the price fluctuations of specific board games. The data have been analysed and processed using a quantitative approach and rigorous statistical methods. Primary and secondary data analysis methods were employed to conduct a correlation study. The results shed light on the structure of the domestic market, as well as the extent to which Hungarian consumer behaviours follow international trends, and which economic factors influence revenue growth. By comparing with international data, the study also examines the factors that either support or hinder Hungarian market players in the global competition. The research further addresses changes in consumer behaviour, the rise of digital board games, and the post-effects of the COVID-19 pandemic, all of which have had a significant impact on the development directions of the industry. We also investigate the innovation opportunities for domestic developers and the growing importance of sustainability considerations in board game production.
board games, pricing, product markets
null
null
https://www.bankszovetseg.hu/gep-reszlet.cshtml?gepId=53&lang=eng
null
2025
Adorj\'{a}n, Bal\'{a}zs and Bedn\'{a}rik, \'{E}va
Statistical Analysis of Hungarian Board Game Sales
article
adorjan:2025:statistical-analysis-hungarian-board-games
null
null
https://bankszovetseg.hu/Public/gep/2025/2025_4_angol/453-484%20E%20Adorjan%20B%20Tarsasjatek%20eladasi%20statisztikak.pdf
10.33908/EF.2025.4.1
453--484
4
12
Economy and Finance
December
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Modern chess engines, such as Stockfish, have grown increasingly powerful at analyzing and calculating evaluations of given chess positions. Most chess engines are capable of not only providing several sequences of optimal moves which are most likely to yield the highest advantage, but also quantifying the perceived advantage for a side in a given position. Most evaluations of chess positions primarily involve a heuristic approximation of several factors, commonly learned through a deep learning approach, along with a branch-like exploration of possible moves. Although chess engines demonstrate extraordinary playing strength, their evaluations of positions are often limited to the advantage under optimal play, which may not be very applicable to the average player if those ideal moves are unlikely to be executed. More explicitly, the evaluation given by an engine is unlikely to accurately reflect the objective advantage of a human player, who is likely to make sub-optimal moves, and thus falls short of achieving an accurate representation of the given positional advantage.
Python; Multi-threaded; Research; Chess
null
https://github.com/paulxro/cse592_chess
https://paulaldea.com/project_pictures/chess_elo_file.pdf
null
2024
Aldea, Paul-Andrei and Bangarbale, Pranav and Li, Jerry
Advancing Chess Engine Design with Elo-Integrated Evaluation (EIE)
misc
aldea:2024:advancing-chess-engine-design-elo-integrated-evaluation-eie
null
null
null
null
null
null
null
null
null
Student project paper, CSE 592, University of Michigan
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The FIDE Laws of Chess establish that if a player runs out of time during a game, they lose unless there exists no sequence of legal moves that ends in a checkmate by their opponent, in which case the game is drawn. The problem of determining whether or not a given chess position is unwinnable for a certain player has been considered intractable by the community and, consequently, chess servers do not apply the above rule rigorously, thus unfairly classifying many games. We propose, to the best of our knowledge, the first algorithm for chess unwinnability that is sound, complete and efficient for practical use. We also develop a prototype implementation and evaluate it over the entire Lichess Database (containing more than 3 billion games), successfully identifying all unfairly classified games in the database.
null
null
https://github.com/miguel-ambrona/D3-Chess
https://doi.org/10.4230/LIPIcs.FUN.2022.2
11th International Conference on Fun with Algorithms, {FUN} 2022, May 30 to June 3, 2022, Island of Favignana, Sicily, Italy
2022
Miguel Ambrona
A Practical Algorithm for Chess Unwinnability
inproceedings
ambrona:2022:practical-algorithm-chess-unwinnability
null
null
https://chasolver.org/FUN22-full.pdf
10.4230/LIPICS.FUN.2022.2
2:1--2:20
null
226
null
null
null
Pierre Fraigniaud and Yushi Uno
https://chasolver.org/
LIPIcs
Schloss Dagstuhl - Leibniz-Zentrum f{\"{u}}r Informatik
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This report encompasses the implementation of two state-of-the-art machine learning algorithms for evaluating chess positions. The first algorithm makes use of artificial neural networks and manual feature representation, thus closely following the implementation and architecture of Matthew Lai's Giraffe. Giraffe learns to play chess largely by self-play and derives its own rules based on the data [1]. Giraffe was implemented as a 7-class classification problem on a dataset of over 10,000 grandmaster level games. Four different implementations of Giraffe were explored covering two different architectures and the effects of regularization on the model performance. The second algorithm implemented goes through an unsupervised learning phase to perform feature extraction followed by a supervised learning phase, thus replicating Omid David's DeepChess. DeepChess evaluates chess positions using a deep neural network without any a priori knowledge regarding the rules of chess. DeepChess is implemented as a Siamese network of two disjoint deep belief networks connected to each other by fully connected layers [2]. This architecture was implemented as a binary classification problem on the same dataset as Giraffe and also on a larger dataset of Lichess games. Different implementations of DeepChess covering different training methodologies and parameter sets were executed.
null
null
null
https://dr.ntu.edu.sg/handle/10356/157572?mode=full
null
2022
Manav Arora
Deep learning for computer chess (part 1)
misc
arora:2022:deep-learning-computer-chess
null
null
null
null
null
null
null
null
null
Final Year Project (FYP)
null
null
null
Nanyang Technological University
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
WebAssembly (Wasm for short) brings a new, powerful capability to the web as well as Edge, IoT, and embedded systems. Wasm is a portable, compact binary code format with high performance and robust sandboxing properties. As Wasm applications grow in size and importance, the complex performance characteristics of diverse Wasm engines demand robust, representative benchmarks for proper tuning. Stopgap benchmark suites, such as PolyBenchC and libsodium, continue to be used in the literature, though they are known to be unrepresentative. Porting of more complex suites remains difficult because Wasm lacks many system APIs and extracting real-world Wasm benchmarks from the web is difficult due to complex host interactions. To address this challenge, we introduce Wasm-R3, the first record and replay technique for Wasm. Wasm-R3 transparently injects instrumentation into Wasm modules to record an execution trace from inside the module, then reduces the execution trace via several optimizations, and finally produces a replay module that is executable standalone without any host environment, on any engine. The benchmarks created by our approach are (i) realistic, because the approach records real-world web applications, (ii) faithful to the original execution, because the replay benchmark includes the unmodified original code, only adding emulation of host interactions, and (iii) standalone, because the replay benchmarks run on any engine. Applying Wasm-R3 to web-based Wasm applications in the wild demonstrates the correctness of our approach as well as the effectiveness of our optimizations, which reduce the recorded traces by 99.53\% and the size of the replay benchmark by 9.98\%. We release the resulting benchmark suite of 27 applications, called Wasm-R3-Bench, to the community, to inspire a new generation of realistic and standalone Wasm benchmarks.
Benchmarking, WebAssembly, record and replay
null
null
https://doi.org/10.1145/3689787
null
2024
Baek, Doehyun and Getz, Jakob and Sim, Yusung and Lehmann, Daniel and Titzer, Ben L. and Ryu, Sukyoung and Pradel, Michael
Wasm-R3: Record-Reduce-Replay for Realistic and Standalone WebAssembly Benchmarks
article
baek:2024:wasm-r3-webassembly-benchmarks
null
null
null
10.1145/3689787
null
OOPSLA2
8
Proc. ACM Program. Lang.
October
null
null
null
null
Association for Computing Machinery
27
347
October 2024
New York, NY, USA
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Generative sequence models are typically trained on sample sequences from natural or formal languages. It is a crucial question whether -- or to what extent -- sample-based training is able to capture the true structure of these languages, often referred to as the "world model". Theoretical results indicate that we can hope for soundness at best, that is, generating valid sequences, but not necessarily all of them. However, it is still important to have practical tools that are able to verify whether a given sequence model is sound. In this study, we focus on chess, as it is a domain that provides enough complexity while having a simple rule-based world model. We propose adversarial sequence generation for verifying the soundness of the sequence model. Our adversaries generate valid sequences so as to force the sequence model to generate an invalid next move prediction. Apart from the falsification of soundness, this method is also suitable for a more fine-grained analysis of the failure modes and the effects of different choices during training. To demonstrate this, we propose a number of methods for adversarial sequence generation and evaluate the approach on a large set of chess models. We train models on random as well as high-quality chess games, using several training recipes. We find that none of the models are sound, but some training techniques and dataset choices are able to improve soundness remarkably. We also investigate the potential application of board state probes in both our training and attack methods. Our findings indicate that the extracted board states have no causal role in next token prediction in most of the models.
generative sequence model, implicit world model, adversarial sequences, chess
null
https://github.com/szegedai/world-model-verification
https://openreview.net/forum?id=BLOIB8CwBI
The Fourteenth International Conference on Learning Representations
2026
Andr\'{a}s Balogh and M\'{a}rk Jelasity
Verification of the Implicit World Model in a Generative Model via Adversarial Sequences
inproceedings
balogh:2026:verification-implicit-world-model-generative-model-adversarial-sequences
null
null
https://arxiv.org/pdf/2602.05903
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This paper presents a data-driven statistical framework to quantify the role of skill in games, addressing the long-standing question of whether success in a game is predominantly driven by skill or chance. We analyze player-level data from four popular games: Chess, Rummy, Ludo, and Teen Patti, using empirical win statistics across varying levels of experience. By modeling win rate as a function of experience through a regression framework and employing empirical bootstrap resampling, we estimate the degree to which outcomes improve with repeated play. To summarize these dynamics, we propose a flexible skill score that emphasizes learning over initial performance, aligning with practical and regulatory interpretations of skill. Our results reveal a clear ranking, with Chess showing the highest skill component and Teen Patti the lowest, while Rummy and Ludo fall in between. The proposed framework is transparent, reproducible, and adaptable to other game formats and outcome metrics, offering potential applications in legal classification, game design, and player performance analysis.
Chance, Chess, Ludo, Rummy, Skill, Statistical Analysis, Teen Patti
null
null
https://doi.org/10.48550/arXiv.2410.14363
null
2024
Tathagata Banerjee and Anushka De and Subhamoy Maitra and Diganta Mukherjee
Skill vs. Chance Quantification for Popular Card & Board Games
article
banerjee:2024:skill-vs-chance-quantification-popular-card-board-games
null
null
null
10.48550/ARXIV.2410.14363
null
null
abs/2410.14363
CoRR
null
null
null
null
null
null
null
null
null
null
2410.14363
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Computer chess research has traditionally focused on creating the strongest possible chess engine. Recently, however, attempts have been made to create engines that mimic the playing strength and style of human players. Our research proposes enhancements of models developed in this vein that more accurately imitate master-level players, as well as improve the prediction accuracy of existing models on weaker players. Our proposed enhancements are simple to apply by post-processing the output of existing chess engines. The performance of our enhancements was evaluated and compared using two metrics, prediction accuracy and average centipawn loss. We found that using an ensemble model over search depths maximised prediction accuracy, while an evaluation window filtering approach was preferable with respect to average centipawn loss.
Artificial intelligence, Chess, Action prediction
null
null
https://doi.org/10.1007/978-3-031-54968-7_1
Advances in Computer Games - 18th International Conference, {ACG} 2023, Virtual Event, November 28-30, 2023, Revised Selected Papers
2023
Daniel Barrish and Steve Kroon and Brink van der Merwe
Making Superhuman {AI} More Human in Chess
inproceedings
barrish:2023:making-superhuman-ai-more-human
null
null
null
10.1007/978-3-031-54968-7_1
3--14
null
14528
null
null
null
Michael Hartisch and Chu{-}Hsuan Hsueh and Jonathan Schaeffer
null
Lecture Notes in Computer Science
Springer
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This article reports an investigation of the extent to which a chess program with an artificial intelligence component (i.e., Stockfish with NNUE) can identify 10 chess moves that are recognized as outstanding chess moves. Stockfish with NNUE was able to identify seven of the ten moves. Although Stockfish with NNUE is a very powerful chess program, it has some limitations in identifying creative chess moves. There is a discussion of those limitations.
Chess, Creative move, Creativity, Stockfish 15, Artificial intelligence
null
null
https://www.sciencedirect.com/science/article/pii/S271337452300016X
null
2023
William Bart
Can artificial intelligence identify creativity?: An empirical study
article
bart:2023:can-artificial-intelligence-identify-creativity
null
null
null
10.1016/j.yjoc.2023.100057
100057
2
33
Journal of Creativity
null
null
null
null
null
null
null
null
null
null
null
null
2713-3745
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Recent progress in machine learning has been fueled by increasing scale, enabling breakthroughs in domains such as image generation, natural language understanding, and decision-making. While tremendous improvements have been realized for low-risk applications like chat completion and recommendation, there are fundamental challenges that need to be addressed for high-risk applications like robotics and security. Specifically, this thesis is motivated by the following issues: (1) Collecting large datasets often relies on crowdsourcing, which inevitably introduces heterogeneous and potentially suboptimal data. (2) Applications that require coordination between multiple agents are often coupled with risk, making it challenging to use decentralized learning algorithms. (3) While neural networks have demonstrated tremendous predictive capabilities, they are inherently fragile to adversarial attacks. In this thesis, we present algorithms for combatting these challenges within the domains of Reinforcement Learning and Adversarial Machine Learning. The first contribution introduces two algorithms for imitation learning in suboptimal settings, demonstrating that by modeling the suboptimalities present, we can improve the learning framework. The second contribution proposes a mechanism to encourage cooperation in multi-agent reinforcement learning, demonstrating that by allowing agents to gift part of their reward, we can promote prosocial behavior in a decentralized fashion. The third contribution develops a robust classification algorithm designed for sparse attacks, demonstrating that by extending our theoretical insights from idealized settings, we can combat adversarial attacks in neural network classifiers. We test our proposed algorithms across applications in image-classification, game-playing, and robotics. Together, these contributions aim to advance the deployment of machine learning in real-world scenarios by considering practical problem settings, presenting novel theoretical and algorithmic insights, and validating these findings empirically.
null
null
null
null
null
2024
Beliaev, Mark
Towards Robust and Cooperative Learning Algorithms
thesis
beliaev:2024:robust-cooperative-learning-algorithms
Doctoral Thesis
null
null
null
null
null
null
null
null
https://escholarship.org/uc/item/6mc2d7q3
null
null
null
null
null
null
null
null
null
null
null
UC Santa Barbara
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess is widely played on computers, yet over-the-board (OTB) chess remains the official and preferred format for many players due to its tactile and immersive nature. Bridging digital and physical play requires accurate recognition of OTB positions. Prior research has explored modular pipelines for board localization, square occupancy, and piece classification, as well as single-stage detectors. While these approaches demonstrate strong accuracy in controlled conditions, they often accumulate errors across stages, face latency and robustness issues, and rarely support interactive play. Thus, in this work, we present Y-LIChess, a YOLO-based system for live, interactive OTB play with engines and online platforms. Y-LIChess employs semi-automatic calibration, event-triggered recognition, and legality-aware validation to ensure seamless, low-latency interaction. On our wood180 dataset, built with an active learning process to reduce manual annotation, Y-LIChess achieves 99.36 AP50 with only 0.21\% per-square error, reconstructs 100\% of boards within one mistake, and performs FEN reconstruction in ~7 ms on GPU, more than an order of magnitude faster than prior pipelines.
YOLO;Accuracy;Pipelines;Games;Robustness;Calibration;Time factors;Synchronization;Image reconstruction;Engines;Chess recognition;YOLO;Stockfish;Lichess
null
null
null
2025 40th International Conference on Image and Vision Computing New Zealand (IVCNZ)
2025
Benitez-Garcia, Gibran and Takahashi, Hiroki
Y-LIChess: Live and Interactive Over-The-Board Chess Recognition and Play with Yolo
inproceedings
benitez-garcia:2025:ylichess-live-interactive-over-the-board-chess-recognition-play-yolo
null
null
null
10.1109/IVCNZ67716.2025.11281842
1--6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The Elo score has been extensively used to rank players by their skill or strength in competitive games such as chess, go, or StarCraft II. The Elo score implicitly assumes games have a strong additive--hence transitive--component. In this paper, we investigate the challenge of identifying transitive components in games. As a starting point, we show that the Elo score provably fails to extract the transitive component of some elementary transitive games. Based on this observation, we propose an alternative ranking system which properly extracts the transitive components in these games. Finally, we conduct an in-depth empirical validation on real-world game payoff matrices: it shows significant prediction performance improvements compared to the Elo score.
null
null
https://github.com/QB3/discrating
https://proceedings.mlr.press/v206/bertrand23a.html
International Conference on Artificial Intelligence and Statistics, 25-27 April 2023, Palau de Congressos, Valencia, Spain
2023
Quentin Bertrand and Wojciech Marian Czarnecki and Gauthier Gidel
On the Limitations of the Elo, Real-World Games are Transitive, not Additive
inproceedings
bertrand:2023:limitations-elo-real-world-games-transitive-not-additive
null
null
https://proceedings.mlr.press/v206/bertrand23a/bertrand23a.pdf
null
2905--2921
null
206
null
null
null
Francisco J. R. Ruiz and Jennifer G. Dy and Jan{-}Willem van de Meent
null
Proceedings of Machine Learning Research
{PMLR}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
When generating levels, algorithmically evaluating the results is essential. In this paper, we looked at predicting a level's difficulty and enjoyment. Past work has approached this problem for puzzle games like Sudoku by analyzing the characteristics of the initial level, the solved level, and the process that led to that solution. In this work, we examined a set of heuristics for Roguelike levels and their solutions, and their relationship to subjective player ratings of the levels. We gathered ratings of difficulty and enjoyment of levels in a study with 143 players. We ran an ablation study on the set of heuristics to find the best combination of heuristics for predicting difficulty and enjoyment with a linear regression model, and found solution path-based heuristics performed well. However, these models did not outperform a simple baseline for predicting enjoyment. Jaccard similarity on paths--a method we have not seen used in the field of game AI--was a useful predictor of difficulty. Testing proximity to enemies across a solution path is the only heuristic needed to predict how enjoyable a level will be.
difficulty, player study, procedural content generation
null
null
https://doi.org/10.1145/3649921.3659846
Proceedings of the 19th International Conference on the Foundations of Digital Games
2024
Biemer, Colan and Cooper, Seth
Solution Path Heuristics for Predicting Difficulty and Enjoyment Ratings of Roguelike Level Segments
inproceedings
biemer:2024:solution-path-heuristics-predicting-difficulty-enjoyment-ratings-roguelike-level-segments
null
null
null
10.1145/3649921.3659846
null
null
null
null
null
null
null
null
FDG '24
Association for Computing Machinery
8
69
null
New York, NY, USA
null
null
null
null
9798400709555
Worcester, MA, USA
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We present a novel method to find chess positions similar to a given query position from a collection of chess games. We consider not only the static similarity resulting from the arrangement of chess pieces, but also the dynamic similarity involving the recognition of chess motifs and the tactical, dynamic aspects of position similarity. By encoding chess tactical problems as text documents, we use information retrieval techniques to enable efficient approximate searches. We have also developed a method for automatically generating tactical puzzles from a collection of chess games. We have experimentally shown the importance of including both static and dynamic features for successful recognition of similar chess motifs. The experiments have clearly shown that dynamic similarity plays a very important role in the evaluation of the similarity of chess motifs by both the program and chess experts.
Problem solving, Chess motifs, Automatic similarity recognition
null
null
https://doi.org/10.1007/978-3-031-11488-5_12
Advances in Computer Games: 17th International Conference, ACG 2021, Virtual Event, November 23–25, 2021, Revised Selected Papers
2021
Bizjak, Miha and Guid, Matej
Automatic Recognition of Similar Chess Motifs
inproceedings
bizjak:2021:automatic-recognition-similar-chess-motifs
null
null
null
10.1007/978-3-031-11488-5_12
131--141
null
null
null
null
null
null
null
null
Springer-Verlag
11
null
null
Berlin, Heidelberg
null
null
null
null
978-3-031-11487-8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
A common way for chess players to practice tactical awareness is to solve chess puzzles, consisting of an initial position and a sequence of moves to achieve a winning position. This practice is more effective when puzzles are matched to the player's skill level. In this work, we present an approach for estimating the difficulty of a chess puzzle using only the initial position and the sequence of correct moves. Our approach uses a fine-tuned modification of the Maia-2 model combined with a set of hand-crafted features and features extracted from chess engines such as Leela Chess Zero and Stockfish. All of these features are then used as input to a gradient boosted decision tree model that predicts the final rating of the puzzle. We applied our approach to the FedCSIS 2025 Challenge on Predicting Chess Puzzle Difficulty Part 2, where it achieved first place.
null
null
null
http://dx.doi.org/10.15439/2025F6497
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)
2025
Sebastian Bj\"{o}rkqvist
Estimating the Difficulty of Chess Puzzles by Combining Fine-Tuned Maia-2 with Hand-Crafted and Engine Features
inproceedings
bjorkqvist:2025:estimating-difficulty-chess-puzzles-combining-fine-tuned-maia-2-hand-crafted-engine-features
null
null
null
10.15439/2025F6497
801--806
null
43
null
null
null
Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik \'{S}l\k{e}zak
null
Annals of Computer Science and Information Systems
IEEE
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Over the last decade, the amount of data generated by software applications, e.g. information systems, websites, mobile applications, etc., has increased tremendously. Process mining, a subdiscipline of data science, uses this data to analyse and improve processes. In this research, the possibilities of process mining on chess event logs are explored, to ultimately improve the chess Elo system. The chess Elo system is a widely used and well accepted rating system. The Elo system is, however, flawed in multiple ways. Two major flaws of the Elo system are its incapability to review a player's strength and the excessive time needed to gain the appropriate Elo rating. This research explores the potential of process mining to identify chess expertise. To be more specific, multiple process mining techniques are applied on chess event logs, and the generated process models are analysed to identify chess expertise. This research presents a method to analyse the differences between high and low rated players. This is achieved by comparing process models generated from high and low rated chess games. The results show that by comparing the process models, differences between high and low rated players can be observed. Process mining is therefore a promising approach to improve the Elo system and might be applicable to other software too. However, only the first twelve moves of a game were used. To gain more insight into the differences between high and low rated players, the mid and end games should be included in the event logs as well. Future research should be conducted with more chess games added to the event logs to increase the validity.
null
null
null
http://essay.utwente.nl/88571/
null
2021
Niels Bos
Improving the Chess Elo System With Process Mining
thesis
bos:2021:improving-chess-elo-system-process-mining
Bachelor's thesis
Faiza Allah Bukhsh
null
null
null
null
null
null
July
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We detail the bread emoji team's submission to the FedCSIS 2025 Predicting Chess Puzzle Difficulty Challenge. Our solution revolved around improving our submission from the previous competition by incorporating a new puzzle metadata feature and optimizing our implementation to allow for larger model ensembles and more stable training. Similar to our submission from last year, our system has two stages: learning a strong predictor for the Lichess dataset and then rescaling the distribution using an empirically-guided post-processing step to fit it to the smaller and noisier competition dataset. Our submission placed second with a ~3.9\% gap in mean squared error (MSE) from first place in the final evaluation.
null
null
null
http://dx.doi.org/10.15439/2025F6771
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)
2025
Tyler Woodruff and Luke Imbing and Marco Cognetta
The bread emoji Team's Submission to the 2025 FedCSIS Predicting Chess Puzzle Difficulty Challenge
inproceedings
bread-emoji-team-submission-2025-fedcsis-predicting-chess-puzzle-difficulty-challenge
null
null
null
10.15439/2025F6771
837--842
null
43
null
null
null
Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik \'{S}l\k{e}zak
null
Annals of Computer Science and Information Systems
IEEE
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Accurately estimating the difficulty of a chess puzzle is important for adaptive training systems, personalized recommendations, and large-scale content curation. Unlike engine evaluations optimized for perfect play, this task involves modeling human-perceived solving difficulty, typically expressed by Glicko-2 ratings. We present a multi-stage framework developed for the FedCSIS 2025 Challenge. The method trains four rating-banded neural regression models in different Elo ranges to capture localized difficulty patterns and reduce bias from unbalanced data. Their predictions are combined with statistical attributes, including success probabilities, failure distributions, and solution length, through a feature-based regression stage to improve cross-range generalization. A final calibration step adjusts the output to statistically plausible rating levels, mitigating systematic prediction biases without adding computational complexity. An additional mask selection procedure was explored as part of the competition extension to identify the 10\% of puzzles most likely to benefit from the refined evaluation. The proposed solution ranked in the final standings. These results demonstrate that a lightweight and interpretable regression pipeline can achieve competitive precision in modeling human-perceived chess puzzle difficulty.
null
null
null
http://dx.doi.org/10.15439/2025F4532
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)
2025
Ling Cen and Jiahao Cen and Malin Song and Zhuliang Yu
A Multi-Stage Framework for Chess Puzzle Difficulty Prediction
inproceedings
cen:2025:multi-stage-framework-chess-puzzle-difficulty-prediction
null
null
null
10.15439/2025F4532
807--812
null
43
null
null
null
Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik \'{S}l\k{e}zak
null
Annals of Computer Science and Information Systems
IEEE
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Open-source software (OSS) projects, characterized by distributed development and volunteer contributions, face challenges in prioritizing user-centered design and usability. This difficulty arises because these projects are primarily driven by developers who focus on technical contributions. As a result, usability and user experience (UX) considerations are often neglected, leading to software that may not meet the needs of its broad and diverse users. To address this issue, we explore the potential of using user personas, which are fictional characters representing real user groups, to enhance user-centered design in OSS projects. Personas promote empathy and a deeper understanding of user needs, thereby improving alignment between developers and users. We conducted an experimental study on three OSS projects: Moodle, Lichess, and Audacity. Personas were created for each project and refined based on feedback from industry experts. Developers rated personas highly for credibility (86\%), consistency (79\%), and friendliness (86\%), highlighting their relevance in OSS projects. A follow-up experiment with students confirmed these findings, with consistency (79\%) demonstrating personas' role in improving usability and aligning developers with user needs. While adoption remains limited due to technical priorities (only 14\% of developers and 34\% of students found personas useful and expressed willingness to adopt them), personas show significant potential to enhance user-centered design in OSS. Further research is needed to understand developers' reluctance to adopt this technique and explore strategies to integrate personas more effectively into OSS workflows. This study's novelty lies in its empirical exploration of personas within OSS, providing quantitative evidence of their effectiveness in improving usability and user-centered design.
open-source software, user-centered design, usability, User persona, UX design
null
https://github.com/ChellyAhmed/personas-os-resources
https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1457563
null
2025
Chelly, Ahmed and Hamza, Salma and Khan, Javed Ali
How Relevant Are Personas in Open-Source Software Development?
article
chelly:2025:how-relevant-personas-open-source-software-development
null
null
null
10.3389/fcomp.2025.1457563
null
null
Volume 7 - 2025
Frontiers in Computer Science
null
null
null
null
null
null
null
null
null
null
null
null
2624-9898
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Several recently introduced deep learning optimizers inspired by second-order methods have shown promising speedups relative to the current dominant optimizer AdamW, particularly in relatively small-scale experiments. However, efforts to validate and replicate their successes have reported mixed results, with some finding quickly diminishing advantage over AdamW with scale. In this work, we investigate how to scale second-order optimizers to achieve optimal performance at scale. Through theoretical and empirical analysis, we derive scaling rules for hyperparameters such as learning rate and weight decay as we scale up model width and depth for a wide range of optimizers, including Shampoo, SOAP, and Muon, accounting for the impact of commonly used techniques such as blocking and grafting. For compute-optimal scaling, we find scaling independent weight decay as 1/width is nearly optimal across optimizers, and that second-order optimizers have a substantially larger optimal model size compared to AdamW for a fixed compute budget. Applying these scaling rules, we show Muon achieves close to 1.4x or higher speedup over AdamW in training transformer language models, while incorrect scaling can decrease the speedup from 1.4x to below 1.1x from 190M to 640M parameter models.
second order optimization; scaling laws; maximum update parameterization; batch size scaling; depth scaling; critical batch size; compute optimal scaling
null
null
https://openreview.net/forum?id=Ei6IsmxYrb
The Thirty-ninth Annual Conference on Neural Information Processing Systems
2025
Zixi Chen and Shikai Qiu and Hoang Phan and Qi Lei and Andrew Gordon Wilson
How to Scale Second-Order Optimization
inproceedings
chen:2025:how-to-scale-second-order-optimization
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We investigate how to scale second-order optimizers effectively, showing they outperform Adam and reduce data needs in compute-optimal transformer training.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The goal of the FedCSIS 2025 Challenge is to build a model to predict the difficulty (measured as Lichess rating) of given chess puzzles. To address this task, we propose a three-stage joint visual–statistical framework for predicting Glicko-based difficulty ratings. In the first stage, a convolutional model based on MobileNetV2 integrates FEN-rendered board images with structured features, including engine-predicted success probabilities, move count, and piece counts, to generate baseline predictions. The second stage employs LightGBM to perform residual refinement, explicitly learning the residual errors of the baseline predictions to correct systematic biases, particularly for extreme difficulty levels. Finally, a domain-informed refinement adjusts the outputs toward interpretable difficulty estimates derived from failure probability distributions and rating-bucket inflection points. Our model ranked 9th in the challenge. Experimental results show that residual refinement and domain-informed adjustment significantly reduce mean squared error compared to the baseline visual–statistical model.
null
null
null
http://dx.doi.org/10.15439/2025F3227
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)
2025
Junlin Chen and Cenru Liu and Yujie Gao
Multi-Modal Deep Learning with Residual and Structure-Guided Refinement for Chess Puzzle Difficulty Prediction
inproceedings
chen:2025:multi-model-deep-learning-residual-structure-guided-refinement-chess-puzzle-difficulty-prediction
null
null
null
10.15439/2025F3227
813--818
null
43
null
null
null
Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik \'{S}l\k{e}zak
null
Annals of Computer Science and Information Systems
IEEE
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Strength estimation and adjustment are crucial in designing human-AI interactions, particularly in games where AI surpasses human players. This paper introduces a novel strength system, including a strength estimator (SE) and an SE-based Monte Carlo tree search, denoted as SE-MCTS, which predicts strengths from games and offers different playing strengths with human styles. The strength estimator calculates strength scores and predicts ranks from games without direct human interaction. SE-MCTS utilizes the strength scores in a Monte Carlo tree search to adjust playing strength and style. We first conduct experiments in Go, a challenging board game with a wide range of ranks. Our strength estimator significantly achieves over 80\% accuracy in predicting ranks by observing 15 games only, whereas the previous method reached 49\% accuracy for 100 games. For strength adjustment, SE-MCTS successfully adjusts to designated ranks while achieving a 51.33\% accuracy in aligning to human actions, outperforming a previous state-of-the-art, with only 42.56\% accuracy. To demonstrate the generality of our strength system, we further apply SE and SE-MCTS to chess and obtain consistent results. These results show a promising approach to strength estimation and adjustment, enhancing human-AI interactions in games. Our code is available at https://rlg.iis.sinica.edu.tw/papers/strength-estimator.
Bradley-Terry Model, Strength Estimation, Strength Adjustment, Human-like Playing Style, Monte-Carlo Tree Search, Go, Chess
null
https://github.com/rlglab/strength-estimator/
https://openreview.net/forum?id=CvjXlsBLCX
The Thirteenth International Conference on Learning Representations
2025
Chun Jung Chen and Chung-Chin Shih and Ti-Rong Wu
Strength Estimation and Human-Like Strength Adjustment in Games
inproceedings
chen:2025:strength-estimation-human-like-strength-adjustment-games
null
null
null
null
null
null
null
null
null
null
null
https://rlg.iis.sinica.edu.tw/papers/strength-estimator/
null
null
null
null
null
null
null
null
null
null
null
null
This paper proposes a strength system that can estimate the strength from games and provide various playing strengths while simultaneously offer a human-like behavior in both Go and chess.
SEtheChessBot8, SEtheChessBot7, SEtheChessBot6, SEtheChessBot5, SEtheChessBot4, SEtheChessBot3, SEtheChessBot2, SEtheChessBot1, SEtheChessGod
https://rlg.iis.sinica.edu.tw/papers/strength-estimator/assets/Strength%20Estimation%20and%20Human-Like%20Strength%20Adjustment%20in%20Games%20Slides.pdf
https://rlg.iis.sinica.edu.tw/papers/strength-estimator/assets/Strength%20Estimation%20and%20Human-Like%20Strength%20Adjustment%20in%20Games%20Poster.pdf
https://rlg.iis.sinica.edu.tw/papers/strength-estimator/assets/models/chess.tar.gz
https://lichess.org/@/Dr_Kiwi/blog/a-humanlike-playstyle-chess-bot-for-every-level-player/pU3M8ya4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This paper presents the design, implementation, and evaluation of an innovative electronic chess board leveraging Hall-effect sensors for chess piece identification. Current electronic chess boards employ diverse technologies such as RFID, resistive switches, optical sensors, and computer vision, each with varying complexity and cost. The proposed solution utilizes Hall-effect sensors to detect changes in magnetic flux density caused by magnets embedded in chess pieces. The design features a modular 4x4 grid of sensors integrated on printed circuit boards, where each piece is differentiated by a unique magnet-spacer configuration. Experimental investigations validate the relationship between magnetic flux density and distance, modelled theoretically and tested with a field strength meter and the PCB system. Results confirm the accurate detection of piece positions and classes, despite minor interference from adjacent pieces. A Python-based system processes the sensor data, translating chess moves into live updates on the Lichess platform. The system has demonstrated accurate identification of piece positions. Further improvements, including refined spacer configurations, are needed to implement piece classification. Future work aims to explore enhanced micro-controller integration, offering a streamlined and accessible alternative to existing technologies.
Magnetic flux density;Magnetic sensors;Printed circuits;Interference;Sensor phenomena and characterization;Robot sensing systems;Sensor systems;Sensors;Reliability;Magnets;electronic chessboard;analogue object identification;Hall-effect sensors;micro-controller
null
null
null
2025 7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (ICHORA)
2025
Cheong, Justin Julius Chin and Bhatia, Praneel and Krauledat, Matthias and Hartanto, Ronny
Design of Electronic Chess Board Using Analogue Hall-effect Sensors for Piece Identification
inproceedings
cheong:2025:design-electronic-chess-board-analogue-hall-effect-sensors-piece-identification
null
null
null
10.1109/ICHORA65333.2025.11016842
1--5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Traditional chess engines face a compelling dual challenge that significantly limits their practical utility for human chess education and training. First, engines like Stockfish, AlphaZero, and LeelaChess require computationally intensive tree-search algorithms, evaluating millions of positions per second to determine optimal moves. Second, and more critically, these systems fail to provide realistic opponents for human players across different skill levels. Current approaches create weaker opponents by simply limiting search depth, resulting in highly inconsistent play patterns that alternate unpredictably between brilliant strategic moves and inexplicable blunders--fundamentally unlike the systematic, consistent errors characteristic of human players at specific ELO ratings. This creates an unrealistic training environment where players practice against opponents that exhibit superhuman tactical vision followed by irrational mistakes, poorly preparing them for actual human competition.
null
null
null
https://cs224r.stanford.edu/projects/pdfs/CS224R_Final_Report__1_%20(3).pdf
null
2025
Choudhary, Prerit and Vagadia, Rikhil and Dhawan, Ankush
Human Chess: A Novel Searchless RL-based Chess Agent Capable of Multi-ELO Human-Like Play
misc
choudhary:2025:human-chess-novel-searchless-rl-chess-agent-capable-multi-elo-human-like-play
null
null
null
null
null
null
null
null
null
Final project for CS 224R Deep Reinforcement Learning, Spring 2025 https://cs224r.stanford.edu/projects/cs224r_final_projects.html
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
From sports to science, the recent availability of large-scale data has allowed us to gain insights on the drivers of human innovation and success in a variety of domains. Here we quantify human performance in the popular game of chess by leveraging a very large dataset comprising over 120 million games between almost 1 million players. We find that individuals encounter hot streaks of repeated success, longer for beginners than for expert players, and even longer cold streaks of unsatisfying performance. Skilled players can be distinguished from the others based on their gaming behaviour. Differences appear from the very first moves of the game, with experts tending to specialize and repeat the same openings while beginners explore and diversify more. However, experts exhibit a broader response repertoire, and display a deeper understanding of different variations within the same line. Over time, the opening diversity of a player tends to decrease, hinting at the development of individual playing styles. Nevertheless, we find that players are often not able to recognize their most successful openings. Overall, our work contributes to quantifying human performance in competitive settings, providing a first large-scale quantitative analysis of individual careers in chess, helping unveil the determinants separating elite from beginner performance.
null
null
null
https://doi.org/10.1038/s41598-023-27735-9
null
2023
Chowdhary, Sandeep and Iacopini, Iacopo and Battiston, Federico
Quantifying human performance in chess
article
chowdhary:2023:quantifying-human-performance-chess
null
null
null
10.1038/s41598-023-27735-9
2113
1
13
Scientific Reports
February
null
null
null
null
null
null
null
null
null
null
null
2045-2322
null
null
null
null
null
null
null
null
null
06
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
A chess opening is the preliminary stage of a chess game which typically consists of moves from formerly analysed openings. Opening strategy plays a crucial role in the entire game and decides the destiny of the middlegame and endgame. In this article, we attempt to introduce a method to predict the opening moves of a specific opponent. The technique analyses the past games played by a specific player to discover the most probable opening that the player is going to play in the subsequent games. The overall performance of this method is analysed and demonstrated by taking several factors such as transience, average turn and opponent's response in a given situation into consideration. This comparative study enables us to acquire knowledge about the opening preferences of an opponent, which gives a strategic advantage to a chess player and helps them to develop a game strategy from the beginning of the game. In this article, we attempt to give an overview of how to predict the most favoured opening of a player and how we can utilise it for our benefit.
Chess Opening, Opponent, middlegame, endgame
null
null
null
null
2021
Chowdhury, Debarpan Bose and Sen, Banashree
Predicting Chess Opening Through Modelling Of Chess Opponents
article
chowdhury:2021:predicting-chess-openings-modelling-opponents
null
null
https://www.webology.org/data-cms/articles/20220815062132pmwebology%2018%20(6)%20-%20579.pdf
null
null
6
18
Webology (ISSN: 1735-188X)
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess is becoming more popular and accessible by the day. For instance, online chess enables matches between players from different parts of the world, bringing new ways of learning the game and interacting with other Web users. With this growth in popularity, there is a possibility to empower amateur players with rich computer analysis and tools, which may assist them in their learning process. One of the ways to analyze chess matches is through the study of errors. In this context, we present a new approach for the task of error prediction in chess. Our motivation is that knowing when players are likely to make a mistake is knowing what types of situations lead to difficulties in making the right decision. To that end, we add an abstraction layer to the already studied error prediction task, providing graph-based features to the machine learning models. Our results show an increase in the accuracy of the tested models, improving the results obtained in recent studies.
null
null
null
https://sol.sbc.org.br/index.php/eniac/article/view/18296
Anais do XVIII Encontro Nacional de Intelig\^{e}ncia Artificial e Computacional
2021
Giovanni Comarela and Davi Silva
A lightweight approach for predicting errors in chess matches
inproceedings
comarela:2021:lightweight-approach-prediction-errors-chess
null
null
https://sol.sbc.org.br/index.php/eniac/article/view/18296/18130
10.5753/eniac.2021.18296
703--714
null
null
null
null
null
null
null
null
SBC
null
null
null
Porto Alegre, RS, Brasil
null
null
2763-9061
null
null
Evento Online
null
null
null
null
null
null
null
PT
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
There is a long-held belief in the chess community that the player with the white pieces has an advantage in making the first move. This phenomenon has been observed repeatedly in over-the-board games between high-level players and professionals. However, less is known about the prevalence of white's advantage in games played between amateurs in more casual settings. This article attempts to identify a first-move advantage in chess by examining a large database of amateur games played online on a dedicated chess website. Win rates are calculated for various rating levels, and the influence of opening move choice is also explored. These results can help determine whether there is an inherent first-move advantage in chess observable for all players in multiple settings, or if this effect is exclusively seen with players of high skill during games played in person.
null
null
null
https://doi.org/10.1177/13896911251315903
null
2025
Tyler Cook
The Advantage of Moving First in Amateur Online Chess
article
cook:2025:advantage-moving-first-amateur-online-chess
null
null
null
10.1177/13896911251315903
null
null
null
ICGA Journal
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We investigate the look-ahead capabilities of chess-playing neural networks, specifically focusing on the Leela Chess Zero policy network. We build on the work of Jenner et al. (2024) by analyzing the model's ability to consider future moves and alternative sequences beyond the immediate next move. Our findings reveal that the network's look-ahead behavior is highly context-dependent, varying significantly based on the specific chess position. We demonstrate that the model can process information about board states up to seven moves ahead, utilizing similar internal mechanisms across different future time steps. Additionally, we provide evidence that the network considers multiple possible move sequences rather than focusing on a single line of play. These results offer new insights into the emergence of sophisticated look-ahead capabilities in neural networks trained on strategic tasks, contributing to our understanding of AI reasoning in complex domains. Our work also showcases the effectiveness of interpretability techniques in uncovering cognitive-like processes in artificial intelligence systems.
model behavior attribution, look-ahead planning, mechanistic interpretability
null
null
https://doi.org/10.48550/arXiv.2505.21552
null
2025
Diogo Cruz
Understanding the learned look-ahead behavior of chess neural networks
article
cruz:2025:understanding-learned-look-ahead-behavior-chess-neural-networks
null
null
null
10.48550/ARXIV.2505.21552
null
null
abs/2505.21552
CoRR
null
keywords from rejected openreview submission: https://openreview.net/forum?id=OcBAd0JPxv&noteId=ZcMfmDpMpn
null
null
null
null
null
null
null
null
2505.21552
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Deep neural networks have been successfully applied in learning the board games Go, chess, and shogi without prior knowledge by making use of reinforcement learning. Although starting from zero knowledge has been shown to yield impressive results, it is associated with high computational costs, especially for complex games. With this paper, we present CrazyAra, a neural network based engine solely trained in supervised manner for the chess variant crazyhouse. Crazyhouse is a game with a higher branching factor than chess, and there is only limited data of lower quality available compared to AlphaGo. Therefore, we focus on improving efficiency in multiple aspects while relying on low computational resources. These improvements include modifications in the neural network design and training configuration, the introduction of a data normalization step and a more sample efficient Monte-Carlo tree search which has a lower chance to blunder. After training on 569537 human games for 1.5 days we achieve a move prediction accuracy of 60.4\%. During development, versions of CrazyAra played professional human players. Most notably, CrazyAra achieved a four to one win over the 2017 crazyhouse world champion Justin Tan (aka LM Jann Lee), who is rated more than 400 Elo higher than the average player in our training set. Furthermore, we test the playing strength of CrazyAra on CPU against all participants of the second Crazyhouse Computer Championships 2017, winning against twelve of the thirteen participants. Finally, for CrazyAraFish we continue training our model on generated engine games. In 10 long time control matches against Stockfish 10, CrazyAraFish wins three games and draws one.
deep learning, chess, crazyhouse, supervised learning, Monte-Carlo tree search
null
https://github.com/QueensGambit/CrazyAra
https://doi.org/10.3389/frai.2020.00024
null
2020
Johannes Czech and Moritz Willig and Alena Beyer and Kristian Kersting and Johannes F{\"{u}}rnkranz
Learning to Play the Chess Variant Crazyhouse Above World Champion Level With Deep Neural Networks and Human Data
article
czech:2020:learning-chess-variant-crazyhouse-above-world-champion-level-deep-neural-networks-human-data
null
null
https://public-pages-files-2025.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2020.00024/pdf
10.3389/frai.2020.00024
24
null
3
Frontiers Artif. Intell.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
https://github.com/QueensGambit/CrazyAra/wiki/Stockfish-10:-Crazyhouse-Self-Play
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The AlphaZero algorithm has been successfully applied in a range of discrete domains, most notably board games. It utilizes a neural network that learns a value and policy function to guide the exploration in a Monte-Carlo Tree Search. Although many search improvements such as graph search have been proposed for Monte-Carlo Tree Search in the past, most of them refer to an older variant of the Upper Confidence bounds for Trees algorithm that does not use a policy for planning. We improve the search algorithm for AlphaZero by generalizing the search tree to a directed acyclic graph. This enables information flow across different subtrees and greatly reduces memory consumption. Along with Monte-Carlo Graph Search, we propose a number of further extensions, such as the inclusion of Epsilon-Greedy exploration, a revised terminal solver and the integration of domain knowledge as constraints. In our empirical evaluations, we use the CrazyAra engine on chess and crazyhouse as examples to show that these changes bring significant improvements to AlphaZero.
Classical Planning Techniques And Analysis, Applications And Case Studies Of Planning And Scheduling Techniques, Learning For Planning And Scheduling, Multi-agent And Distributed Planning
null
null
https://ojs.aaai.org/index.php/ICAPS/article/view/15952
Proceedings of the Thirty-First International Conference on Automated Planning and Scheduling, {ICAPS} 2021, Guangzhou, China (virtual), August 2-13, 2021
2021
Johannes Czech and Patrick Korus and Kristian Kersting
Improving AlphaZero Using Monte-Carlo Graph Search
inproceedings
czech:2021:improving-alphazero-monte-carlo-graph-search
null
null
https://ojs.aaai.org/index.php/ICAPS/article/view/15952/15763
null
103--111
null
null
null
null
null
Susanne Biundo and Minh Do and Robert Goldman and Michael Katz and Qiang Yang and Hankz Hankui Zhuo
null
null
{AAAI} Press
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Machine learning (ML) systems across many application areas are increasingly demonstrating performance that is beyond that of humans. In response to the proliferation of such models, the field of Explainable AI (XAI) has sought to develop techniques that enhance the transparency and interpretability of machine learning methods. In this work, we consider a question not previously explored within the XAI and ML communities: Given a computational system whose performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve the performance of the human? We study this question in the context of the game of Chess, for which computational game engines that surpass the performance of the average player are widely available. We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods, which we evaluate with a multi-day user study against two baselines. The results show that our approach produces rationales that lead to statistically significant improvement in human task performance, demonstrating that rationales automatically generated from an AI's internal task model can be used not only to explain what the system is doing, but also to instruct the user and ultimately improve their task performance.
explainable AI, machine learning
null
null
https://doi.org/10.1145/3377325.3377512
Proceedings of the 25th International Conference on Intelligent User Interfaces
2020
Das, Devleena and Chernova, Sonia
Leveraging rationales to improve human task performance
inproceedings
das:2020:leveraging-rationales-human-task-performance
null
null
null
10.1145/3377325.3377512
510--518
null
null
null
null
null
null
null
IUI '20
Association for Computing Machinery
9
null
null
New York, NY, USA
null
null
null
null
9781450371186
Cagliari, Italy
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Mechanistic interpretability (MI) studies aim to identify the specific neural pathways that underlie decision-making in neural networks. Here we analyze both the horizontal and vertical information flows of a chess-playing transformer. This paper introduces a new taxonomy of chessboard attention patterns that synchronize to guide move selection. Our findings show that the early layers of the chess transformer correctly identify moves that are highly ranked by the final layer. Experiments conducted on human chess players laid the foundation for much of our current understanding of human problem-solving, cognition, and visual memory. We believe that the study of chess language transformers may be an equally fruitful research area for AGI systems.
chess cognition, mechanistic interpretability, transformers
null
null
https://doi.org/10.1007/978-3-031-65572-2_7
Artificial General Intelligence: 17th International Conference, AGI 2024, Seattle, WA, USA, August 13–16, 2024, Proceedings
2024
Davis, Austin L. and Sukthankar, Gita
Decoding Chess Mastery: A Mechanistic Analysis of a Chess Language Transformer Model
inproceedings
davis:2024:decoding-chess-mastery-mechanistic-analysis-chess-language-transformer-model
null
null
null
10.1007/978-3-031-65572-2_7
63--72
null
null
null
null
null
null
null
null
Springer-Verlag
10
null
null
Berlin, Heidelberg
null
null
null
null
978-3-031-65571-5
SEATTLE, WA, USA
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Probing classifiers are a technique for understanding and modifying the operation of neural networks in which a smaller classifier is trained to use the model's internal representation to learn a probing task. Similar to a neural electrode array, probing classifiers help both discern and edit the internal representation of a neural network. This paper evaluates the use of probing classifiers to modify the internal hidden state of a chess-playing transformer. The weights of the learned linear classifiers are very informative and can be used to reliably delete pieces from the board, showing that the model internally maintains an editable emergent representation of game state.
Representation Engineering; Probing Classifiers; Chess-playing Language Models; GPT
null
https://github.com/austinleedavis/icmla-2024
null
2024 International Conference on Machine Learning and Applications (ICMLA)
2024
Davis, Austin L and Sukthankar, Gita
Hidden Pieces: An Analysis of Linear Probes for GPT Representation Edits
inproceedings
davis:2024:hidden-pieces-analysis-linear-probes-gpt-representation-edits
null
null
https://ial.eecs.ucf.edu/pdf/Sukthankar-Austin-ICMLA2024.pdf
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This dissertation investigates the structures and mechanisms underpinning the latent space representations that emerge within Generative Pretrained Transformer (GPT) models. Addressing the broader goal of enhancing AI trustworthiness through transparency, accountability, and controllability, we focus on techniques to understand, quantify, and manipulate these latent space representations. Through a series of analyses, we examine several chess-playing GPT models as controlled testbeds, leveraging their structured decision space to explore emergent representations and decision-making processes. Key contributions include a mechanistic analysis of the attention heads and latent representations, the development of novel metrics for evaluating intervention outcomes, and the application of linear probe classifiers to decode and edit the model's internal world representations. Analysis of the probe weight vectors reveals that the chess-playing GPT developed an emergent world model of the game that includes pieces, positions, and movement rules, and provides empirical support for the linear representation hypothesis--the idea that abstract concepts are encoded as specific directions in the model's hidden state space. Complementary analysis of the hidden state vectors demonstrates that the model's internal representations honor the Markovian property of chess. Experimental results demonstrate that linear interventions can causally steer GPT outputs while preserving their semantic validity. Drawing on the dose-response analogy from medicine, we vary both the strength and position of interventions, showing that output quality is maximized when intervention strength follows an exponentially decaying schedule across token positions. Similar experiments using sparse autoencoders in place of linear probes yielded significantly poorer performance. These results highlight the effectiveness of simple linear probes as valuable tools for interpretability and control.
Latent Representation Editing; Mechanistic Interpretability; Linear Probes; Chess Language Models; Artificial Intelligence
null
null
https://stars.library.ucf.edu/etd2024/285
null
2025
Davis, Austin
Interpretation and Control of AI Model Behavior Through Direct Adjustment of Latent Representations
thesis
davis:2025:interpretation-control-ai-model-behavior-direct-adjustment-latent-representations
PhD thesis
Sukthankar, Gita
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
University of Central Florida
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Probing classifiers are a technique for understanding and modifying the operation of neural networks in which a smaller classifier is trained to use the model's internal representation to learn a probing task. Similar to a neural electrode array, probing classifiers help both discern and edit the internal representation of a neural network. This article evaluates the use of probing classifiers to modify the internal hidden state of a chess-playing transformer. We contrast the performance of standard linear probes against Sparse Autoencoders (SAEs), a latent space interpretability technique designed to decompose polysemantic concepts into atomic features via an overcomplete basis. Our experiments demonstrate that linear probes trained directly on the residual stream significantly outperform probes based on SAE latents. When quantifying the success of interventions via the probability of legal moves, linear probe edits achieved an 88\% success rate, whereas SAE-based edits yielded only 41\%. These findings suggest that while SAEs are valuable for specific interpretability tasks, they do not enhance the controllability of hidden states compared to raw vectors. Finally, we show that the residual stream respects the Markovian property of chess, validating the feasibility of applying consistent edits across different time steps for the same board state.
representation engineering; probing classifiers; chess; language models; GPT; sparse autoencoders
null
https://github.com/austinleedavis/icmla-2024
https://doi.org/10.20944/preprints202601.2229.v1
null
2026
Austin L. Davis and Robinson Vasquez Ferrer and Gita Sukthankar
Exploring the Limits of Probes for Latent Representation Edits in GPT Models
article
davis:2026:exploring-limits-probes-latent-representation-edits-gpt-models
null
null
null
10.20944/preprints202601.2229.v1
null
null
null
Preprints
January
null
null
null
null
Preprints
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess is a centuries-old game that continues to be widely played worldwide. Opening Theory is one of the pillars of chess and requires years of study to be mastered. In this paper, we use the games played in an online chess platform to exploit the ``wisdom of the crowd'' and answer questions traditionally tackled only by chess experts. We first define a relatedness network of chess openings that quantifies how similar two openings are to play. Using this network, we identify communities of nodes corresponding to the most common opening choices and their mutual relationships. Furthermore, we demonstrate how the relatedness network can be used to forecast future openings players will start to play, with back-tested predictions outperforming a random predictor. We then apply the Economic Fitness and Complexity algorithm to measure the difficulty of openings and players' skill levels. Our study not only provides a new perspective on chess analysis but also opens the possibility of suggesting personalized opening recommendations using complex network theory.
null
null
null
https://doi.org/10.1038/s41598-023-31658-w
null
2023
De Marzo, Giordano and Servedio, Vito D. P.
Quantifying the complexity and similarity of chess openings using online chess community data
article
de-marzo:2023:complexity-similarity-chess-openings-community-data
null
null
null
10.1038/s41598-023-31658-w
5327
1
13
Scientific Reports
April
null
null
null
null
null
null
null
null
null
null
null
2045-2322
null
null
null
null
null
null
null
null
null
01
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess piece recognition using computer vision is a problem generally approached in various ways, with different kinds of results and complexity. Deep learning is a state-of-the-art approach to image recognition problems, although it requires huge data sets. This paper discusses a method to identify chess images synthetically generated in Blender via its Python API, by fine-tuning a VGG16 convolutional network, obtaining close to 97\% accuracy on piece classification. Possible applications include automated recording of real chess games and real-time play between online players using real boards.
Histograms, Image color analysis, Games, Tracking, Image edge detection, Machine learning, Task analysis, chess, neural networks, piece recognition, synthetic data generation
null
https://github.com/rafaelmcam/cChess
null
2019 21st Symposium on Virtual and Augmented Reality (SVR)
2019
de S\'{a} Delgado Neto, Afonso and Mendes Campello, Rafael
Chess Position Identification using Pieces Classification Based on Synthetic Images Generation and Deep Neural Network Fine-Tuning
inproceedings
de-sa-delgado-neto:2019:chess-position-identification
null
null
null
10.1109/SVR.2019.00038
152--160
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
https://github.com/rafaelmcam/cChess/tree/master/Jogos
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
In reinforcement learning, Transformers have been shown to be powerful models for multi-task policy distillation and, to a lesser extent, policy improvement via return interventions within frameworks such as Decision Transformers. These recent results are somewhat atypical for reinforcement learning, as they do not rely on the learning of a value function, which is usually at the heart of most traditional approaches. In this paper, we explore a principled approach to purely generative value function approximation with Transformers, opening the way for existing techniques to be applied for policy improvement. Importantly, unlike other RL methods, this generative approach allows us to kickstart the learning process by fine-tuning strong pretrained state predictors, such as foundation models, substantially shortening the training time. We showcase the potential of our approach by constructing an action-value function for chess that can play at the level of an expert human and over 400 Elo stronger than direct behavioural cloning.
reinforcement learning, transformers, policy evaluation, policy improvement, sequence modeling, compression
null
null
https://openreview.net/forum?id=6qtDu7hVPF
null
2024
Gregoire Deletang and Anian Ruoss and Li Kevin Wenliang and Elliot Catt and Tim Genewein and Jordi Grau-Moya and Marcus Hutter and Joel Veness
Generative Reinforcement Learning with Transformers
misc
deletang:2024:generative-reinforcement-learning-with-transformers
null
null
https://openreview.net/pdf?id=6qtDu7hVPF
null
null
null
null
null
null
Submitted to ICLR 2024
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
https://openreview.net/attachment?id=6qtDu7hVPF&name=supplementary_material
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The capabilities of today's natural language processing systems are typically evaluated using large datasets of curated questions and answers. While these are critical benchmarks of progress, they also suffer from weakness due to artificial distributions and incomplete knowledge. Artifacts arising from artificial distributions can overstate language model performance, while incomplete knowledge limits fine-grained analysis. In this work, we introduce a complementary benchmarking approach based on SimPlified Language Activity Traces (SPLAT). SPLATs are corpora of language encodings of activity in some closed domain (we study traces from chess and baseball games in this work). SPLAT datasets use naturally-arising distributions, allow the generation of question-answer pairs at scale, and afford complete knowledge in their closed domains. We show that language models of three different architectures can answer questions about world states using only verb-like encodings of activity. Our approach is extensible to new language models and additional question-answering tasks.
null
null
null
https://aclanthology.org/2021.conll-1.16
Proceedings of the 25th Conference on Computational Natural Language Learning
2021
Demeter, David and Downey, Doug
Who's on First?: Probing the Learning and Representation Capabilities of Language Models on Deterministic Closed Domains
inproceedings
demeter:2021:probing-learning-representation-language-models-closed-domains
null
null
null
10.18653/v1/2021.conll-1.16
210--222
null
null
null
November
null
Bisazza, Arianna and Abend, Omri
null
null
Association for Computational Linguistics
null
null
null
Online
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Creating a chess engine using language models is challenging due to the complexity of understanding the logic and depth of moves. To solve this problem, we propose several approaches involving different types of training for different LLMs that already have some task knowledge, though not necessarily of chess, and adjust them for faster learning. We propose three groups of methods and five approaches to enhance state-of-the-art results. Our approaches involve different types of database manipulation, the training of different types of networks, and the application of a widely used approach, RAG (Retrieval-Augmented Generation). We demonstrated the potential of networks and approaches with promising results for the first two methods, and we demonstrated with RAG in combination with FAISS (Facebook AI Similarity Search) [1], algebraic notation, and GPT-3.5-turbo-instruct that it is possible to increase the ability of networks to predict good chess moves. The first approach demonstrated that successive move generation benefits generalization and reduces illegal moves; for the second approach, we proved that the use of noiseless algebraic notation gives the network the possibility to progress significantly before producing illegal moves. The second group of approaches demonstrated that using the FEN (Forsyth-Edwards Notation), a textual graphical representation, for the prediction of move sequences to force context understanding could give promising results with a low number of epochs. Even with limited computation time and epochs, we found promising results on a part of our dataset, which can be improved with more epochs and data.
Training, Deep learning, Social networking (online), Large language models, Retrieval augmented generation, Force, Transformers, Logic, Engines, Tuning, LLMs, RAG, FAISS, GPT, Similarity Search, Chess engine, Fine tuning
null
null
null
SoutheastCon 2025
2025
Diallo, Kassim B. and Akhloufi, Moulay A.
ChessMoveLLM: Large Language Models for Chess Next Move Prediction
inproceedings
diallo:2025:chessmovellm-large-language-models-chess-next-move-prediction
null
null
null
10.1109/SoutheastCon56624.2025.10971611
475--480
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
While generalization over tasks from easy to hard is crucial to profile language models (LLMs), the datasets with fine-grained difficulty annotations for each problem across a broad range of complexity are still missing. Aiming to address this limitation, we present Easy2Hard-Bench, a consistently formatted collection of 6 benchmark datasets spanning various domains, such as mathematics and programming problems, chess puzzles, and reasoning questions. Each problem within these datasets is annotated with numerical difficulty scores. To systematically estimate problem difficulties, we collect abundant performance data on attempts at each problem by humans in the real world or by LLMs on prominent leaderboards. Leveraging the rich performance data, we apply well-established difficulty ranking systems, such as Item Response Theory (IRT) and Glicko-2 models, to uniformly assign numerical difficulty scores to problems. Moreover, datasets in Easy2Hard-Bench distinguish themselves from previous collections by a higher proportion of challenging problems. Through extensive experiments with six state-of-the-art LLMs, we provide a comprehensive analysis of their performance and generalization capabilities across varying levels of difficulty, with the aim of inspiring future research in LLM generalization. The datasets are available at https://huggingface.co/datasets/furonghuang-lab/Easy2Hard-Bench.
null
null
null
https://www.microsoft.com/en-us/research/publication/easy2hard-bench-standardized-difficulty-labels-for-profiling-llm-performance-and-generalization/
NeurIPS 2024
2024
Ding, Mucong and Deng, Chenghao and Choo, Jocelyn and Wu, Zichu and Agrawal, Aakriti and Schwarzschild, Avi and Zhou, Tianyi and Goldstein, Tom and Langford, John and Anandkumar, A. and Huang, Furong
Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization
inproceedings
ding:2024:easy2hard-bench
null
null
null
null
null
null
null
null
September
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
In the previous chapter, we explored asynchronous programming with asyncio, learning how to handle multiple I/O-bound operations efficiently within a single thread.
null
null
null
https://doi.org/10.1007/979-8-8688-1261-3_17
Deep Dive Python: Techniques and Best Practices for Developers
2025
Divakaran, Adarsh
Data Serialization and Persistence
inbook
divakaran:2025:data-serialization-persistence-deep-dive-python-techniques-best-practices-developers
null
null
null
10.1007/979-8-8688-1261-3_17
531--588
null
null
null
null
null
null
null
null
Apress
null
null
null
Berkeley, CA
null
null
null
null
979-8-8688-1261-3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess is a complex game that requires deep strategic thinking, pattern recognition, calculations, and creative problem-solving. Modeling strategic decision-making in chess endgames poses a unique challenge due to the game's high complexity and the uncertainty of human play. In this paper, we propose a hybrid analytical framework that combines combinatorial game theory, graph-theoretic game-tree analysis, and probabilistic modeling to explore optimal strategies in simplified endgame positions. Using real examples including games from the 2024 FIDE World Chess Championship match between Gukesh and Ding, we construct and analyze game trees to evaluate strategies under probabilistic distributions of plausible human responses. Our approach extends traditional models by incorporating uncertainty into the decision-making process, allowing for a richer understanding of practical play in high-stakes scenarios. This framework offers a bridge between theoretical rigor and real-world applicability in chess and beyond.
combinatorial game theory; CGT; optimal strategy; probabilistic modeling; graph-theoretic game tree analysis; chess
null
null
https://nhsjs.com/2025/beyond-perfect-play-a-combinatorial-and-probabilistic-approach-to-chess-endgame-strategy/
null
2025
Divij Dogra
Beyond Perfect Play: A Combinatorial and Probabilistic Approach to Chess Endgame Strategy
article
dogra:2025:beyond-perfect-play-combinatorial-probabilistic-approach-chess-endgame-strategy
null
null
null
null
null
null
null
The National High School Journal of Science
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Large language models often possess latent capabilities that lie dormant unless explicitly elicited, or surfaced, through fine-tuning or prompt engineering. Predicting, assessing, and understanding these latent capabilities pose significant challenges in the development of effective, safe AI systems. In this work, we recast elicitation as an information-constrained fine-tuning problem and empirically characterize upper bounds on the minimal number of parameters needed to achieve specific task performances. We find that training as few as 10-100 randomly chosen parameters--several orders of magnitude fewer than state-of-the-art parameter-efficient methods--can recover up to 50\% of the performance gap between pretrained-only and full fine-tuned models, and 1,000s to 10,000s of parameters can recover 95\% of this performance gap. We show that a logistic curve fits the relationship between the number of trained parameters and model performance gap recovery. This scaling generalizes across task formats and domains, as well as model sizes and families, extending to reasoning models and remaining robust to increases in inference compute. To help explain this behavior, we consider a simplified picture of elicitation via fine-tuning where each trainable parameter serves as an encoding mechanism for accessing task-specific knowledge. We observe a relationship between the number of trained parameters and how efficiently relevant model capabilities can be accessed and elicited, offering a potential route to distinguish elicitation from teaching.
elicitation, large language models, LLMs, latent capabilities, minimum description length
null
https://github.com/edonoway/quantifying-elicitation-neurips25
https://openreview.net/forum?id=Dkgx2pS4Ww
The Thirty-ninth Annual Conference on Neural Information Processing Systems
2025
Elizabeth Donoway and Hailey Joren and Arushi Somani and Henry Sleight and Julian Michael and Michael R DeWeese and John Schulman and Ethan Perez and Fabien Roger and Jan Leike
Quantifying Elicitation of Latent Capabilities in Language Models
inproceedings
donoway:2025:quantifying-elicitation-latent-capabilities-language-models
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
With the growing capabilities of AI, technology is increasingly able to match or even surpass human performance. In the current study, focused on the game of chess, we investigated whether chess players could distinguish if they were playing against a human or a computer, and how they achieved this. A total of 24 chess players each played eight 5+0 Blitz games from different starting positions. They played against (1) a human, (2) Maia, a neural network-based chess engine trained to play in a human-like manner, (3) Stockfish 16, the best chess engine available, downgraded to play at a lower level, and (4) Stockfish 16 at its maximal level. The opponent's move time was fixed at 10 seconds. During the game, participants verbalized their thoughts, and after each game, they indicated by means of a questionnaire whether they thought they had played against a human or a machine and if there were particular moves that revealed the nature of the opponent. The results showed that Stockfish at the highest level was usually correctly identified as an engine, while Maia was often incorrectly identified as a human. The moves of the downgraded Stockfish were relatively often labeled as `strange' by the participants. In conclusion, the Turing test, as applied here in a domain where computers can perform superhumanly, is essentially a test of whether the chess computer can devise suboptimal moves that correspond to human moves, and not necessarily a test of computer intelligence.
null
null
null
https://www.sciencedirect.com/science/article/pii/S2451958824001295
null
2024
Yke Bauke Eisma and Robin Koerts and Joost {de Winter}
Turing Tests in Chess: An Experiment Revealing the Role of Human Subjectivity
article
eisma:2024:turing-tests-chess-human-subjectivity
null
null
null
10.1016/j.chbr.2024.100496
100496
null
null
Computers in Human Behavior Reports
null
null
null
null
null
null
null
null
null
null
null
null
2451-9588
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
When solving decision-making tasks, humans typically depend on information from two key sources: (1) Historical policy data, which provides interaction replay from the environment, and (2) Analytical insights in natural language form, exposing the invaluable thought process or strategic considerations. Despite this, the majority of preceding research focuses on only one source: they either use historical replay exclusively to directly learn policy or value functions, or engaged in language model training utilizing mere language corpus. In this paper, we argue that a powerful autonomous agent should cover both sources. Thus, we propose ChessGPT, a GPT model bridging policy learning and language modeling by integrating data from these two sources in Chess games. Specifically, we build a large-scale game and language dataset related to chess. Leveraging the dataset, we showcase two model examples ChessCLIP and ChessGPT, integrating policy learning and language modeling. Finally, we propose a full evaluation framework for evaluating language model's chess ability. Experimental results validate our model and dataset's effectiveness. We open source our code, model, and dataset at https://github.com/waterhorse1/ChessGPT.
null
https://huggingface.co/Waterhorse/ChessCLIP,https://huggingface.co/Waterhorse/chessgpt-base-v1,https://huggingface.co/Waterhorse/chessgpt-chat-v1
https://github.com/waterhorse1/ChessGPT
null
Proceedings of the 37th International Conference on Neural Information Processing Systems
2023
Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun
ChessGPT: bridging policy learning and language modeling
inproceedings
feng:2023:chessgpt-bridging-policy-learning-language-modeling
null
null
https://proceedings.neurips.cc/paper_files/paper/2023/file/16b14e3f288f076e0ca73bdad6405f77-Paper-Datasets_and_Benchmarks.pdf
null
null
null
null
null
null
null
null
null
NeurIPS '23
Curran Associates Inc.
47
316
null
Red Hook, NY, USA
null
null
null
null
null
New Orleans, LA, USA
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
While Generative AI rapidly advances in various domains, generating truly creative, aesthetic, and counter-intuitive outputs remains a challenge. This paper presents an approach to tackle these difficulties in the domain of chess puzzles. We start by benchmarking Generative AI architectures, and then introduce an RL framework with novel rewards based on chess engine search statistics to overcome some of those shortcomings. The rewards are designed to enhance a puzzle's uniqueness, counter-intuitiveness, diversity, and realism. Our RL approach dramatically increases counter-intuitive puzzle generation by 10x, from 0.22\% (supervised) to 2.5\%, surpassing existing dataset rates (2.1\%) and the best Lichess-trained model (0.4\%). Our puzzles meet novelty and diversity benchmarks, retain aesthetic themes, and are rated by human experts as more creative, enjoyable, and counter-intuitive than composed book puzzles, even approaching classic compositions. Our final outcome is a curated booklet of these AI-generated puzzles, which is acknowledged for creativity by three world-renowned experts.
null
null
null
https://arxiv.org/abs/2510.23881
null
2025
Xidong Feng and Vivek Veeriah and Marcus Chiam and Michael Dennis and Ryan Pachauri and Thomas Tumiel and Federico Barbero and Johan Obando-Ceron and Jiaxin Shi and Satinder Singh and Shaobo Hou and Nenad Toma\v{s}ev and Tom Zahavy
Generating Creative Chess Puzzles
misc
feng:2025:generating-creative-chess-puzzles
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2510.23881
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
cs.AI
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Can we learn more from data than existed in the generating process itself? Can new and useful information be constructed from merely applying deterministic transformations to existing data? Can the learnable content in data be evaluated without considering a downstream task? On these questions, Shannon information and Kolmogorov complexity come up nearly empty-handed, in part because they assume observers with unlimited computational capacity and fail to target the useful information content. In this work, we identify and exemplify three seeming paradoxes in information theory: (1) information cannot be increased by deterministic transformations; (2) information is independent of the order of data; (3) likelihood modeling is merely distribution matching. To shed light on the tension between these results and modern practice, and to quantify the value of data, we introduce epiplexity, a formalization of information capturing what computationally bounded observers can learn from data. Epiplexity captures the structural content in data while excluding time-bounded entropy, the random unpredictable content exemplified by pseudorandom number generators and chaotic dynamical systems. With these concepts, we demonstrate how information can be created with computation, how it depends on the ordering of the data, and how likelihood modeling can produce more complex programs than present in the data generating process itself. We also present practical procedures to estimate epiplexity which we show capture differences across data sources, track with downstream performance, and highlight dataset interventions that improve out-of-distribution generalization. In contrast to principles of model selection, epiplexity provides a theoretical foundation for data selection, guiding how to select, generate, or transform data for learning systems.
null
null
null
https://arxiv.org/abs/2601.03220
null
2026
Marc Finzi and Shikai Qiu and Yiding Jiang and Pavel Izmailov and J. Zico Kolter and Andrew Gordon Wilson
From Entropy to Epiplexity: Rethinking Information for Computationally Bounded Intelligence
misc
finzi:2026:entropy-epiplexity-rethinking-information-computationally-bounded-intelligence
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2601.03220
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
cs.LG
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Modern chess language models are dense transformers trained on millions of games played by thousands of high-rated individuals. However, these monolithic networks tend to collapse into mode-averaged behavior, where stylistic boundaries are blurred, and rare but effective strategies are suppressed. To counteract homogenization, we introduce Mixture-of-Masters (MoM), the first chess mixture-of-experts model with small-sized GPT experts emulating world-class grandmasters. Each expert is trained with a combination of self-supervised learning and reinforcement learning guided by chess-specific rewards. For each move, a post-hoc learnable gating network selects the most appropriate persona to channel depending on the game state, allowing MoM to switch its style dynamically--e.g., Tal's offensive vocation or Petrosian's defensive solidity. When evaluated against Stockfish on unseen standard games, MoM outperforms both dense individual expert networks and popular GPT baselines trained on aggregated data, while ensuring generation variety, control, and interpretability.
chess language modeling, mixture of experts, reinforcement learning, behavioral stylometry
null
https://anonymous.4open.science/r/mixture-of-masters
https://arxiv.org/abs/2602.04447
null
2026
Giacomo Frisoni and Lorenzo Molfetta and Davide Freddi and Gianluca Moro
Mixture of Masters: Sparse Chess Language Models with Player Routing
misc
frisoni:2026:mixture-masters-sparse-chess-language-models-player-routing
null
null
null
null
null
null
null
null
null
Submitted to ICLR 2026 https://openreview.net/forum?id=lnIlH0hfek
null
null
null
null
null
null
null
null
2602.04447
null
null
null
null
null
Sparse Chess Language Models with Player Routing
mixture-of-masters
null
null
null
null
null
null
null
null
cs.LG
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess enhances problem-solving and decisionmaking skills. Traditional chess often features static difficulty, which can frustrate novices or bore masters. Dynamic Difficulty Adaptation (DDA) addresses this by adjusting the game's challenge based on player performance, enabling personalized learning and engagement. DDA relies on game analytics by collecting and interpreting performance data such as move timing, accuracy, and positional evaluations compared to top engines. In chess, the structured gameplay and availability of engines like Stockfish offer an ideal environment for DDA. By integrating game analytics, DDA can apply adjustments responsive to the player's current ability rather than relying on fixed difficulty presets. This approach tailors the gaming experience to individual skill levels, preventing novice frustration and keeping expert players challenged. Thus, game analytics enhances DDA in chess, creating an adaptive environment that caters to a broad spectrum of players.
Accuracy;Games;Machine learning;Ubiquitous computing;Timing;Problem-solving;Engines;Game Analytics;Dynamic Difficulty Adaption;Chess;Machine Learning
null
null
null
2025 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC)
2025
Gamal, Mohamed and Aboulhassan, Amal and Hassan, Yomna M.I.
Machine Learning Based Dynamic Difficulty Adaptation for Chess
inproceedings
gamal:2025:machine-learning-based-dynamic-difficulty-adaptation-chess
null
null
null
10.1109/MIUCC66482.2025.11196851
9--14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The presence or absence of winner-loser effects is a widely discussed phenomenon across both sports and psychology research. Investigation of such effects is often hampered by the limited availability of data. Online chess has exploded in popularity in recent years and provides vast amounts of data which can be used to explore this question. With a hierarchical Bayesian regression model, we carefully investigate the presence of such experiential effects in online chess. Using a large quantity of online chess data, we see little evidence for experiential effects that are consistent across all players, with some individual players showing some evidence for such effects. Given the challenging temporal nature of this data, we discuss several methods for assessing the suitability of our model and carefully check its validity.
Winner-loser Effects, Chess, Hierarchical Bayesian Modeling, online competitions
null
https://github.com/OwenWard/Chess_Winner
https://doi.org/10.1515/jqas-2025-0035
null
2025
Adam Gee and Sydney O. Seese and James P. Curley and Owen G. Ward
Investigating experiential effects in online chess using a hierarchical Bayesian analysis
article
gee:2025:investigating-experiential-effects-online-chess-hierarchical-bayesian-analysis
null
null
https://www.degruyterbrill.com/document/doi/10.1515/jqas-2025-0035/pdf
10.1515/jqas-2025-0035
null
null
null
Journal of Quantitative Analysis in Sports
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
https://zenodo.org/records/17247312
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The aim of this research was to investigate the effects of transcranial direct current stimulation (tDCS) on risky decision-making in student chess players, taking into account their personality traits. In this study, 28 high school students who were active in chess and participated in provincial and national chess leagues were selected. Based on the NEO Five-Factor Inventory of personality traits, they were divided into two groups: 14 extroverted students (17 \pm{} 0.88) and 14 introverted students (16.5 \pm{} 1.02). Each participant attended three separate sessions in the laboratory, with a minimum 72-hour rest period between sessions. In each session, participants performed the Iowa Gambling Task and the Lichess computer game before any stimulation. They were then subjected to one of three conditions: right anodal/left cathodal, right cathodal/left anodal, or sham stimulation, for 20 minutes at 2 mA intensity over the dorsolateral prefrontal cortex. After the stimulation, participants repeated the Iowa Gambling Task and the Lichess computer game. Data analysis using two-way mixed ANOVA revealed a significant difference in the Iowa Gambling Task between right anodal/left cathodal and right cathodal/left anodal stimulation based on personality traits (p = 0.001). The findings of this study indicated that transcranial direct current stimulation had a differential effect on decision-making in chess players based on their personality traits. Specifically, the study's results showed that extroverted players exhibited more risk-taking behaviour, while introverted players acted more cautiously.
Extroverted,Brain stimulation,Risky Decision-making,Introverted,Chess
null
null
https://spsyj.ssrc.ac.ir/article_4394_55f2b660d8300f25206eb52a483e0bb4.pdf
null
2025
Ghayebzadeh, Shahrouz and Moharramzadeh, Mehrdad and Zoghi, Maryam
The Effect of Transcranial Direct Current Stimulation on Risky Decision-Making of Student Chess Players Based on their Introverted and Extroverted Personality Traits
article
ghayebzadeh:2025:effect-transcranial-current-decision-making-chess-personality
null
null
null
10.22089/spsyj.2025.17731.2546
null
null
null
Sport Psychology Studies
null
null
null
null
null
Sport Sciences Research institute
null
null
null
null
null
null
2345-2978
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2538-1504
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Puzzle-solving has long served as a benchmark for evaluating artificial intelligence, testing a model's ability to reason, infer, and strategize across complex problem spaces. Traditional AI and machine learning methods, such as symbolic reasoning and reinforcement learning, have made notable strides in structured domains like board games and logic puzzles. However, as neural networks and, more recently, large language models (LLMs) have evolved, new possibilities have emerged for tackling a broader range of puzzle types, including those requiring nuanced commonsense reasoning, abstract pattern recognition, and complex multi-step calculations. LLMs, with their vast data-driven language capabilities, hold unique potential to bridge structured logical tasks and less formal, knowledge-based puzzles. Despite these advances, the current landscape of puzzle-solving with LLMs reveals both achievements and limitations, particularly when models are tasked with problems that demand interpretative reasoning and precise calculation. This thesis explores the evolving role of LLMs in solving such complex reasoning tasks, specifically focusing on their puzzle-solving capabilities. Divided into two main sections, the thesis first provides a comprehensive survey of recent advancements in LLM methodologies, covering diverse prompting techniques, neuro-symbolic approaches, and fine-tuning strategies for puzzles. Using a newly proposed taxonomy, puzzles are categorized into rule-based and rule-less types, with each category examined for its unique cognitive demands on LLMs. The second section presents experimental evaluations conducted on four datasets--two math-based datasets (GSM8K, SVAMP) and two puzzle-focused datasets (Game of 24 and RiddleSense). Various reasoning techniques, including Input-Output (IO) prompting, Chain-of-Thought (CoT), Least-to-Most (LtM), and Faithful-CoT methods, are employed to assess LLM performance. 
Models of varying scales, particularly smaller LLMs like Llama-3.1 family and Mistral, are tested across settings such as zero-shot, few-shot, and self-consistency to evaluate their efficacy in solving complex and multi-step reasoning tasks. The thesis provides critical insights into the performance limitations of current LLMs in puzzle-solving, particularly noting that advanced reasoning methods like Faithful-CoT and puzzle translation techniques yield inconsistent improvements with smaller models. Finally, it outlines future research directions, advocating for expanded dataset creation, neuro-symbolic integration, and advancements in puzzle generation. This thesis aims to deepen our understanding of LLMs' reasoning abilities and highlight pathways to enhance their performance in complex cognitive tasks.
Large Language Models; Reasoning; Puzzle Solving; Prompting; Neurosymbolic Methods
null
null
https://dspace.lib.ntua.gr/xmlui/handle/123456789/61469
null
2025
Giadikiaroglou, Panagiotis
Investigating the capabilities of language models in puzzle reasoning: A survey and experimental analysis
thesis
giadikiaroglou:2025:investgiating-capabilities-language-models-puzzles
Bachelor's thesis
Giorgos Stamou
null
10.26240/heal.ntua.29165
null
null
null
null
March
null
null
null
null
National Technical University of Athens
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We extend the Stainless deductive verifier with floating-point support, providing the first automated verification support for floating-point numbers for a subset of Scala that includes polymorphism, recursion and higher-order functions. We follow the recent approach in the KeY verifier to axiomatise reasoning about mathematical functions, but go further by supporting all functions from Scala's math API, and by verifying the correctness of the axioms against the actual implementation in Stainless itself. We validate Stainless' floating-point support on a new set of benchmarks sampled from real-world code from GitHub, showing that it can verify specifications about, e.g., ranges of output or absence of special values for most supported functions, or produce counter-examples when the specifications do not hold.
null
null
null
https://arxiv.org/abs/2601.14059
null
2026
Andrea Gilot and Axel Bergstr\"{o}m and Eva Darulova
Verifying Floating-Point Programs in Stainless
misc
gilot:2026:verifying-floating-point-programs-stainless
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2601.14059
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
cs.PL
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess engines have played a fundamental role in the advancement of artificial intelligence applied to the game since the mid-20th century. Today, Stockfish, the most powerful and open source chess engine, still relies on alpha-beta pruning, but also incorporates machine learning techniques. The goal of this project is to develop a chess engine capable of competing against both other engines and human players, using minimax with alpha-beta pruning as its core. Additionally, we analyze the impact of other classical algorithmic techniques such as transposition tables, iterative deepening, and a move generator based on magic bitboards. The chess engine has been uploaded to the Lichess platform, where AlphaDeepChess achieved an Elo rating of 1900 while running on a Raspberry Pi 5 equipped with a 2GB transposition table.
Artificial Intelligence, Chess Engine, Alpha-beta pruning, Iterative deepening, Quiescence search, Move ordering, Transposition table, Zobrist hashing, Magic bitboards
null
https://github.com/LauraWangQiu/AlphaDeepChess
https://hdl.handle.net/20.500.14352/123857
null
2025
Gir{\'o}n Herranz, Juan and Wang Qiu, Yi
AlphaDeepChess: chess engine based on alpha-beta pruning
thesis
giron:2025:alpha-deep-chess-chess-engine-alpha-beta-pruning
Grado en Ingenier\'{\i}a de Computadores y Grado en Desarrollo de Videojuegos
F\'{a}bregas Alfaro, Ignacio and Rubio Cu\'{e}llar, Rub\'{e}n Rafael
null
20.500.14352/123857
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Universidad Complutense de Madrid
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Language Models have not acquired their popularity based only on their text-generation capabilities, but also for their ability to learn. An exploration of these capabilities is carried out over chess, which allows the game to be processed as a Natural Language problem. We analyse their capabilities of reasoning and solving puzzles with different prompt approaches, introducing concepts of in-context learning and fine-tuning.
GPT-4, GPT-3, Modelos ling\"{u}\'{\i}sticos GPT, Modelos ling\"{u}\'{\i}sticos (LM), Machine Learning (ML), Artificial Intelligence (AI), GPT Language models, Language models (LMs)
null
null
https://riunet.upv.es/handle/10251/197801
null
2023
Albert Gramaje, Borja
Exploring GPT's Capabilities in Chess-Puzzles
thesis
gramaje:2023:exploring-gpt-capabilities-chess-puzzles
mathesis
Ferri Ram\'{i}rez, C\'{e}sar
https://riunet.upv.es/server/api/core/bitstreams/8dc80122-7f5c-40d2-9844-cd40d5a943d7/content
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Valencia, Spain
null
null
null
null
null
null
null
null
null
Universitat Polit\`{e}cnica de Val\`{e}ncia
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
In this paper we present results from recent experiments that suggest that chess players associate emotions to game situations and reactively use these associations to guide search for planning and problem solving. We report on a pilot experiment with multi-modal observation of human experts engaged in solving challenging problems in Chess. Our results confirm that cognitive processes have observable correlates in displays of emotion and fixation, and that these displays can be used to evaluate models of cognitive processes. They also revealed an unexpected observation of rapid changes in emotion as players attempt to solve challenging problems. In this paper, we propose a cognitive model to explain our observations, and describe initial results from a second experiment designed to test this model.
chess, chunking, cognitive models, multimodal observation of gaze and emotion, situation models, working memory
null
null
https://doi.org/10.1145/3279810.3279846
Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data
2018
Guntz, Thomas and Crowley, James L. and Vaufreydaz, Dominique and Balzarini, Raffaella and Dessus, Philippe
The role of emotion in problem solving: first results from observing chess
inproceedings
guntz:2018:role-emotion-problem-solving-first-results-observing-chess
null
null
https://dl.acm.org/doi/pdf/10.1145/3279810.3279846
10.1145/3279810.3279846
null
null
null
null
null
null
null
null
MCPMD '18
Association for Computing Machinery
8
12
null
New York, NY, USA
null
null
null
null
9781450360722
Boulder, Colorado
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This paper attempts to generate point values for chess pieces, as alternatives to the commonly accepted chess piece values. We use a database of over a million online chess games to heuristically determine the value of a chess piece, by using material imbalances to predict game results. We then explore how piece values change when we analyze material imbalances at various stages of a chess game. As further exploration, we determine what practical values chess pieces and imbalances have at various rating ranges. This creates practical data that players of varying rating can use to aid in chess calculation, as opposed to the rigid values that are typically accepted.
null
null
null
https://www.jsr.org/hs/index.php/path/article/view/4356
null
2023
Gupta, Aditya and Grattoni, Christopher and Gupta, Arnav
Determining Chess Piece Values Using Machine Learning
article
gupta:2023:determining-chess-piece-values-machine-learning
null
null
null
10.47611/jsrhs.v12i1.4356
null
1
12
Journal of Student Research
February
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Houston, USA
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
With large chess-playing neural network models like AlphaZero contesting the state of the art within the world of computerised chess, two challenges present themselves: the question of how to explain the domain knowledge internalised by such models, and the problem that such models are not made openly available. This work presents the re-implementation of the concept detection methodology applied to AlphaZero, by using large, open-source chess models with comparable performance. We obtain results similar to those achieved when applying this methodology to AlphaZero, while relying solely on open-source resources. We also present a novel explainable AI (XAI) method, which is guaranteed to highlight exhaustively and exclusively the information used by the explained model. This method generates visual explanations tailored to domains characterised by discrete input spaces, as is the case for chess. Our presented method has the desirable property of controlling the information flow between any input vector and the given model, which in turn provides strict guarantees regarding what information is used by the trained model during inference. We demonstrate the viability of our method by applying it to standard $8 \times 8$ chess, using large open-source chess models.
null
null
https://github.com/patrik-ha/ii-map
https://doi.org/10.1038/s41598-024-70701-2
null
2024
Hammersborg, Patrik and Str{\"u}mke, Inga
Information based explanation methods for deep learning agents---with applications on large open-source chess models
article
hammersborg:2024:information-based-explanation-methods-deep-learning-agents-applications-large-open-source-chess-models
null
null
null
10.1038/s41598-024-70701-2
20174
1
14
Scientific Reports
August
null
null
https://patrik-ha.github.io/ii-map/
null
null
null
null
null
null
null
null
2045-2322
null
null
null
null
null
null
null
null
null
30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
In games like chess, strategy evolves dramatically across distinct phases - the opening, middlegame, and endgame each demand different forms of reasoning and decision-making. Yet, many modern chess engines rely on a single neural network to play the entire game uniformly, often missing opportunities to specialize. In this work, we introduce M2CTS, a modular framework that combines Mixture of Experts with Monte Carlo Tree Search to adapt strategy dynamically based on game phase. We explore three different methods for training the neural networks: Separated Learning, Staged Learning, and Weighted Learning. By routing decisions through specialized neural networks trained for each phase, M2CTS improves both computational efficiency and playing strength. In experiments on chess, M2CTS achieves up to +122 Elo over standard single-model baselines and shows promising generalization to multi-agent domains such as Pommerman. These results highlight how modular, phase-aware systems can better align with the structured nature of games and move us closer to human-like behavior in dividing a problem into many smaller units.
null
null
https://github.com/QueensGambit/CrazyAra
https://arxiv.org/abs/2401.16852
null
2025
Felix Helfenstein and Johannes Czech and Jannis Bl\"{u}ml and Max Eisel and Kristian Kersting
Checkmating One, by Using Many: Combining Mixture of Experts with MCTS to Improve in Chess
misc
helfenstein:2025:checkmating-one-using-many-combining-mixture-experts-mcts-improve-chess
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2401.16852
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
cs.LG
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Humans are social beings, and most of our decisions are influenced by considerations of how others will respond. Whether in poker or political negotiations, the riskiness of a decision is often determined by the variance of the other party's possible responses. Such socially-contingent decisions can be framed in terms of adversarial games, which differ from other risky situations such as lotteries because the risk arises from uncertainty about the opponent's decisions, and not some independent stochasticity in the world. We use chess as a lens through which we can study human risk-taking behavior in adversarial decision making. We develop a novel algorithm for calculating the riskiness of each move in a chess game, and apply it to data from over 1 billion online chess games. We find that players not only exhibit state-dependent risk preferences, but also change their risk-taking strategy depending on their opponent, and that this effect differs in experts and novices.
risk taking; adversarial games; chess
null
https://github.com/choldawa/Chess
https://escholarship.org/uc/item/403764rd
Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, CogSci 2021, virtual, July 26-29, 2021
2021
Cameron Holdaway and Ed Vul
Risk-taking in adversarial games: What can 1 billion online chess games tell us?
inproceedings
holdaway:2021:risk-taking-adversarial-games-what-billion-chess-games-tell-us
null
null
null
null
null
null
null
null
null
null
W. Tecumseh Fitch and Claus Lamm and Helmut Leder and Kristin Te{\ss}mar{-}Raible
null
null
cognitivesciencesociety.org
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
https://www.linkedin.com/pulse/what-can-1-billion-chess-games-tell-us-risk-taking-cameron-holdaway/
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Chess is a strategy board game with its inception dating back to the 15th century. The Covid-19 pandemic has led to a chess boom online with 95,853,038 chess games being played during January, 2021 on lichess.com. Along with the chess boom, instances of cheating have also become more rampant. Classifications have been used for anomaly detection in different fields and thus it is a natural idea to develop classifiers to detect cheating in chess. However, there are no specific examples of this, and it is difficult to obtain data where cheating has occurred. So, in this paper, we develop 4 machine learning classifiers, Linear Discriminant Analysis, Quadratic Discriminant Analysis, Multinomial Logistic Regression, and K-Nearest Neighbour classifiers to predict chess game results and explore predictors that produce the best accuracy performance. We use Confusion Matrix, K Fold Cross-Validation, and Leave-One-Out Cross-Validation methods to find the accuracy metrics. There are three phases of analysis. In phase I, we train classifiers using 1.94 million over-the-board games as training data and 20 thousand online games as testing data and obtain accuracy metrics. In phase II, we select a smaller pool of 212 games, select additional predictor variables from chess engine evaluation of the moves played in those games and check whether the inclusion of the variables improves performance. Finally, in phase III, we investigate patterns in misclassified cases to define anomalies. From phase I, the models are not performing at a utilizable level of accuracy (44-63\%). For all classifiers, it is no better than deciding the class with a coin toss. K-Nearest Neighbour with K = 7 was the best model. In phase II, adding the new predictors improved the performance of all the classifiers significantly across all validation methods. In fact, using only significant variables as predictors produced highly accurate classifiers. 
Finally, from phase III, we could not find any patterns or significant differences between the predictors for both correct classifications and misclassifications. In conclusion, machine learning classification is only one useful tool to spot instances that indicates anomalies. However, we cannot simply judge anomalous games using only this method.
null
null
null
null
null
2021
Hoque, Masudul
Classification of Chess Games: An Exploration of Classifiers for Anomaly Detection in Chess
thesis
hoque:2022:classification-chess-games-exploration-classifiers-anomaly-detection-chess
mathesis
Premarathna, Galkande Iresha
https://cornerstone.lib.mnsu.edu/cgi/viewcontent.cgi?article=2118&context=etds
null
null
null
null
null
null
https://cornerstone.lib.mnsu.edu/etds/1119/
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Minnesota State University, Mankato
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This paper introduces ChessLM, a novel Transformer-based model designed to learn rich, contextual vector representations (embeddings) of chess positions. Moving beyond traditional chess engines focused on move evaluation, our approach is inspired by the success of self-supervised learning in Natural Language Processing. We adapt the Vision Transformer architecture and train the model using two self-supervised tasks on a large corpus of chess games: Masked Piece Prediction, which requires the model to infer masked pieces based on board context, and Move Difference Prediction, which involves predicting the number of moves between two positions from the same game. Qualitative analysis demonstrates the model's ability to capture high-level structural and thematic similarities between positions across different game phases (opening, middlegame, endgame), such as shared pawn structures, material imbalances, and king safety patterns. However, the analysis also reveals limitations that would significantly impact our model's ability to generate position evaluations. Despite this, the learned embeddings are shown to be effective for retrieving strategically or tactically similar positions, enabling applications such as intelligent chess puzzle suggestions. This work contributes a novel method for learning chess position representations that encode thematic information, opening avenues for future research in applying representation learning to various chess-related tasks beyond traditional evaluation.
Chess, Transformers, Machine Learning, Embeddings
https://huggingface.co/odestorm1/chesslm
https://github.com/bluehood/Encoder-ChessLM
https://bluehood.github.io/research/benh_Beyond_Evaluation__Learning_Contextual_Chess_Position_Representations_2025.pdf
null
2025
Ben Hull
Beyond Evaluation: Learning Contextual Chess Position Representations
misc
hull:2025:beyond-evaluation-learning-contextual-chess-position-representations
null
null
https://bluehood.github.io/research/benh_Beyond_Evaluation__Learning_Contextual_Chess_Position_Representations_2025.pdf
null
null
null
null
null
null
Technical report
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Accessed via \url{https://bluehood.github.io/}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This paper examines pedagogical approaches and instructional tools for teaching chess in higher education. Chess instruction in universities can serve disciplinary goals (e.g., sport sciences, cognitive psychology), cross-curricular goals (critical thinking, problem solving), and extra-curricular objectives (wellness, student engagement). Drawing on theoretical frameworks from constructivist and experiential learning, and on empirical literature about cognitive and educational effects of chess training, the paper presents a structured course design, recommended teaching methods, practical activities, digital and physical tools, assessment strategies, and implementation considerations. The aim is to provide instructors and programme designers with an evidence-informed, practical roadmap to develop effective, measurable, and scalable chess courses or modules that align with higher-education learning outcomes.
chess education, higher education, pedagogy, constructivism, blended learning, assessment, chess engines, digital boards, transferable skills
null
null
https://egarp.lt/index.php/JPURM/article/view/460
null
2025
Huseynova, Kifayet and Novruzova, Aide
Methods and Tools for Teaching Chess in Higher Education
article
huseynova:2025:methods-tools-teaching-chess-higher-education
null
null
null
10.69760/portuni.0110018
176--186
10
1
Porta Universorum
December
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2025
Dongyoon Hwang and Hojoon Lee and Jaegul Choo and Dongmin Park and Jongho Park
Can Large Language Models Develop Strategic Reasoning? Post-training Insights from Learning Chess
article
hwang:2025:can-large-language-models-develop-strategic-reasoning-post-training-insights-learning-chess
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This study addresses the challenge of distinguishing between human and computer-generated play in chess, crucial for ensuring the integrity and fairness of both online and tournament play. As unauthorized computer assistance becomes increasingly sophisticated, we utilize sequential neural networks to analyze a vast dataset of chess games, employing both traditional engines, such as Stockfish and Leela, and innovative neural networks like Maia and its individual sub-models. This analysis incorporates centipawn deviation metrics to gauge departures from typical computer strategies, Maia's insights into human and idiosyncratic playstyles, and an evaluation of time distribution for moves. Our method extends by considering the strategic implications of move sequences and the consistency of play under varying game conditions, enhancing our understanding of the nuanced differences between human and AI play. Remarkably, our algorithm achieves approximately 98\% accuracy in identifying the use of chess engines, offering a significant advancement in efforts to maintain the game's integrity. To further validate our findings, we conducted cross-validation with a separate dataset, confirming the robustness of our model. We also explored the algorithm's applicability to detecting AI assistance in other board games, suggesting its potential for broader use. The research highlights the critical role of machine learning in combating digital cheating, emphasizing the need for continuous adaptation of detection methods to keep pace with evolving technologies. Additionally, our findings point to the importance of developing ethical guidelines for the use of AI in games, ensuring a fair and level playing field for all participants. Lastly, by publishing our methodology and the criteria for AI detection, we aim to foster an open dialogue within the gaming community and among developers, promoting transparency and collaboration in the fight against cheating.
Chess, Cheating, AI, Neural Networks
null
null
https://ceur-ws.org/Vol-3885/paper13.pdf
Proceedings of 29th International Conference Information Society and University Studies
2024
Iavich, Maksim and Kevanishvili, Zura
Detecting Fair Play Violations in Chess Using Neural Networks
inproceedings
iavich:2024:detecting-fair-play-violations-chess-neural-networks
null
null
null
null
121--127
null
3341
null
null
null
null
null
{CEUR} Workshop Proceedings
CEUR-WS.org
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
https://www.youtube.com/watch?v=hJ7POry_q6U
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
ChessFormer introduces a novel searchless chess engine leveraging transformer architecture to approximate human decision-making in chess. Trained on a vast dataset of 3 billion chess positions, our model learns its entire decision-making process directly from training data. Evaluations show an improvement in human move-matching accuracy over prior models in high-Elo ranges and the model's ability to distinguish between human and algorithmic decision-making, offering potential applications in chess analysis or cheat detection.
null
null
null
null
Modeling Decisions for Artificial Intelligence
2026
Zeman, Jakub and {\v{C}}epek, Miroslav
ChessFormer - Modeling Human Decision Making in Chess
inproceedings
jakub:2026:chessformer-modeling-human-decision-making-chess
null
null
null
null
42--53
null
null
null
null
null
Torra, Vicen{\c{c}} and Narukawa, Yasuo and Domingo-Ferrer, Josep
null
null
Springer Nature Switzerland
null
null
null
Cham
null
null
null
null
978-3-032-00891-6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Do neural networks learn to implement algorithms such as look-ahead or search "in the wild"? Or do they rely purely on collections of simple heuristics? We present evidence of learned look-ahead in the policy and value network of Leela Chess Zero, the currently strongest deep neural chess engine. We find that Leela internally represents future optimal moves and that these representations are crucial for its final output in certain board states. Concretely, we exploit the fact that Leela is a transformer that treats every chessboard square like a token in language models, and give three lines of evidence: (1) activations on certain squares of future moves are unusually important causally; (2) we find attention heads that move important information "forward and backward in time," e.g., from squares of future moves to squares of earlier ones; and (3) we train a simple probe that can predict the optimal move 2 turns ahead with 92\% accuracy (in board states where Leela finds a single best line). These findings are clear evidence of learned look-ahead in neural networks and might be a step towards a better understanding of their capabilities.
null
null
https://github.com/HumanCompatibleAI/leela-interp
https://proceedings.neurips.cc/paper_files/paper/2024/file/37d9f19150fce07bced2a81fc87d47a6-Paper-Conference.pdf
Advances in Neural Information Processing Systems
2024
Jenner, Erik and Kapur, Shreyas and Georgiev, Vasil and Allen, Cameron and Emmons, Scott and Russell, Stuart
Evidence of Learned Look-Ahead in a Chess-Playing Neural Network
inproceedings
jenner:2024:evidence-lookahead-chess-neural-network
null
null
https://proceedings.neurips.cc/paper_files/paper/2024/file/37d9f19150fce07bced2a81fc87d47a6-Paper-Conference.pdf
10.52202/079017-0987
31410--31437
null
37
null
null
null
A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang
https://leela-interp.github.io/
null
Curran Associates, Inc.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Although pretrained large language models (LLMs) can generate convincing natural language about games like chess, they lack positional and contextual knowledge and as such are poor game-playing agents. In this project, I utilize language pretraining; instruction fine-tuning, an additional training regimen with chess-specific tasks presented in natural language; and chain-of-thought prompting, a natural language description of problem reasoning prepended to the answer of a problem, to improve the performance of LLMs at chess move generation (validity/legality and quality of moves). I show that fine-tuned GPT-2-XL, a 1.5B parameter LLM, performs favorably at move generation compared to ChatGPT with few-shot learning; I also validate the additional benefits of chain-of-thought prompting compared to plain prompts in ChatGPT while highlighting tradeoffs between the quality of natural language and the quality of chess when more verbose prompts are used in the smaller GPT-2-XL.
null
null
null
https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1234/final-reports/final-report-169466939.pdf
null
2023
Bowen Jiang
Building a Natural Language Chess Engine with Pretraining and Instruction Fine-Tuning
misc
jiang:2023:building-natural-language-chess-engine-pretraining-instruction-finetunine
null
null
null
null
null
null
null
null
null
Stanford CS224N Custom Project, Winter 2023 (https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1234/project.html)
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We consider supervised learning (regression/classification) problems with tensor-valued input. We derive multi-linear sufficient reductions for the regression or classification problem by modeling the conditional distribution of the predictors given the response as a member of the quadratic exponential family. We develop estimation procedures of sufficient reductions for both continuous and binary tensor-valued predictors. We prove the consistency and asymptotic normality of the estimated sufficient reduction using manifold theory. For continuous predictors, the estimation algorithm is highly computationally efficient and is also applicable to situations where the dimension of the reduction exceeds the sample size. We demonstrate the superior performance of our approach in simulations and real-world data examples for both continuous and binary tensor-valued predictors.
null
null
null
https://arxiv.org/abs/2502.20216
null
2025
Daniel Kapla and Efstathia Bura
Generalized Multi-Linear Models for Sufficient Dimension Reduction on Tensor Valued Predictors
misc
kapla:2025:generalized-multi-linear-models-dimension-reduction-tensor-valued-predictors
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2502.20216
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
stat.ME
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
In this paper, we address the problem of finding similar chess puzzles to a given query puzzle using a dataset of one million puzzles. We approach this problem through an information retrieval (IR) perspective. Chess positions can be compared in mainly two aspects, positional similarity and dynamic similarity. We experimented by comparing chess positions solely based on the string representation. However, to capture static and dynamic features of a position, we create an index of features for the chess puzzles which include static and dynamic information regarding the best move for the given puzzles. The features of the query chess puzzle can be compared to the puzzles in the dataset to find the most similar chess puzzle. To optimize the search, the index features can be stored in a separate database to avoid repetitive calculations of mobility and similarity between FENs. Our approach incorporates algorithms such as cosine similarity, Levenshtein distance, and feature encoding to enhance the accuracy and efficiency of puzzle retrieval.
null
null
null
null
Computer Science Engineering: Proceedings of the 1st International Conference on Computing and Intelligent Information Systems (ICCIIS 2024), Bangalore, India, 19-20th April, 2024 Volume 1
2024
Karn, Aryan and Biradar, Chinmay Anil and Puranik, Aryan and Kireeti, Attili Krishna and Jayashree, R
Personalized recommendation of chess puzzles
inproceedings
karn:2024:personalized-recommendation-chess-puzzles
null
null
null
null
29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
CRC Press
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The increasing demand for robust and scalable encryption algorithms is driven by rapid advancements in computing technology, and emerging technologies like quantum cryptanalysis pose significant threats to information security. This paper presents a novel trinary-based multistage encryption algorithm designed for scalability and enhanced security through the use of the less-explored trinary number system. By leveraging a dynamically adapting, pseudo-random, multi-dimensional key space and high diffusion transform, the algorithm introduces significant resistance to common attacks. Additionally, two innovative generative steganography schemes are proposed that algorithmically generate unique musical compositions or chess positions directly from the ciphertext. The overall algorithm operates at linear complexity, which ensures optimal performance. To contextualize practicality, the system is benchmarked against established lightweight cryptographic baselines, and a dataset of 1000 plaintext–ciphertext pairs has been publicly released to support independent analysis. Ultimately, this work presents a complete and flexible framework that uniquely combines a novel cipher with generative steganography, offering a robust and efficient system.
Encryption;Steganography;Security;Tensors;Table lookup;Encoding;Music;Heuristic algorithms;Ciphers;Transforms;Cryptography;Encryption;Generative Steganography;Transforms;Trinary Encoding
null
null
null
null
2026
Kaushal Karthik, K M and Ramesh, R
GenSTEG: A Light and Scalable Trinary-Based Encryption with Multimodal Generative Steganography
article
karthik:2026:gensteg-light-scalable-trinary-based-encryption-multiomodal-generative-steganography
null
null
null
10.1109/ACCESS.2026.3665790
null
null
null
IEEE Access
null
null
null
null
null
null
null
null
null
null
null
null
2169-3536
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Language models have shown unprecedented capabilities, sparking debate over the source of their performance. Is it merely the outcome of learning syntactic patterns and surface level statistics, or do they extract semantics and a world model from the text? Prior work by Li et al. investigated this by training a GPT model on synthetic, randomly generated Othello games and found that the model learned an internal representation of the board state. We extend this work into the more complex domain of chess, training on real games and investigating our model's internal representations using linear probes and contrastive activations. The model is given no a priori knowledge of the game and is solely trained on next character prediction, yet we find evidence of internal representations of board state. We validate these internal representations by using them to make interventions on the model's activations and edit its internal board state. Unlike Li et al.'s prior synthetic dataset approach, our analysis finds that the model also learns to estimate latent variables like player skill to better predict the next character. We derive a player skill vector and add it to the model, improving the model's win rate by up to 2.6 times.
GPT, large language model, interpretability, world model
https://huggingface.co/adamkarvonen/chess_llms
https://github.com/adamkarvonen/chess_llm_interpretability
https://openreview.net/forum?id=PPTrmvEnpW
First Conference on Language Modeling
2024
Adam Karvonen
Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models
inproceedings
karvonen:2024:emergent-world-models-latent-variable-estimation-chess-playing
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We train a GPT model from scratch to play chess and find that it learns to compute board state and estimate player Elo. We use these representations to edit the GPT's internal board state and increase or decrease its chess-playing ability.
null
null
null
null
null
null
null
https://huggingface.co/datasets/adamkarvonen/chess_games
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
What latent features are encoded in language model (LM) representations? Recent work on training sparse autoencoders (SAEs) to disentangle interpretable features in LM representations has shown significant promise. However, evaluating the quality of these SAEs is difficult because we lack a ground-truth collection of interpretable features which we expect good SAEs to identify. We thus propose to measure progress in interpretable dictionary learning by working in the setting of LMs trained on Chess and Othello transcripts. These settings carry natural collections of interpretable features--for example, ``there is a knight on F3''--which we leverage into metrics for SAE quality. To guide progress in interpretable dictionary learning, we introduce a new SAE training technique, p-annealing, which demonstrates improved performance on our metric.
Language models, interpretability, dictionary learning
null
https://github.com/adamkarvonen/SAE_BoardGameEval
https://proceedings.neurips.cc/paper_files/paper/2024/file/9736acf007760cc2b47948ae3cf06274-Paper-Conference.pdf
Advances in Neural Information Processing Systems
2024
Karvonen, Adam and Wright, Benjamin and Rager, Can and Angell, Rico and Brinkmann, Jannik and Smith, Logan and Mayrink Verdun, Claudio and Bau, David and Marks, Samuel
Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models
inproceedings
karvonen:2024:measuring-progress-dictionary-learning-language-model-interpretability-board-games-models
null
null
null
10.52202/079017-2644
83091--83118
null
37
null
null
An older version of this paper was previously published at the ICML 2024 Workshop on Mechanistic Interpretability: https://openreview.net/forum?id=qzsDKwGJyB
A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang
https://nips.cc/virtual/2024/poster/95121
null
Curran Associates, Inc.
null
null
null
null
null
null
null
null
null
null
We measure progress in training sparse autoencoders for LM interpretability by working in the setting of LMs trained on chess and Othello.
null
https://nips.cc/media/neurips-2024/Slides/95121_AxaqEUR.pdf
https://nips.cc/media/Po…30259153.6686015
null
null
null
null
null
null
null
null
null
null
null
null
null
https://slideslive.com/39025524/measuring-progress-in-dictionary-learning-for-language-model-interpretability-with-board-game-models
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Android and iOS are the two dominant mobile operating systems in the rapidly expanding smartphone market, serving billions of users worldwide. Both platforms feature extensive app stores with millions of applications available for download. While security measures are in place to prevent the distribution of malicious or vulnerable apps, instances of malware have still been discovered in both stores. These incidents highlight a significant security risk that threatens user privacy and highlights the urgent need for advanced research in detecting malicious code and security vulnerabilities in mobile applications. As a step toward addressing this challenge, this thesis explores the feasibility of using binary similarity detection techniques to identify similarities between Android and iOS applications. To achieve this, we designed and implemented a novel analysis pipeline specifically tailored for comparing Android apps (compiled into binary OAT files) with iOS binaries. This pipeline enables the automatic identification of matches between corresponding applications across both platforms. The pipeline comprises several key stages, including preparing the apps and their third-party libraries for binary analysis, disassembling the binaries using both IDA Pro and Ghidra to account for potential variations introduced by different disassemblers, and conducting similarity analysis on the disassembly results. To assess the effectiveness of our pipeline, we conducted a comprehensive analysis on a dataset of 100 cross-platform apps. Our findings indicate that current binary similarity analysis methods have limitations in directly identifying cross-platform similarities between applications. However, we demonstrated that incorporating third-party libraries into the analysis significantly enhances similarity detection and can help to provide meaningful insights. This highlights the crucial role that third-party libraries play in cross-platform app analysis. 
Additionally, we found that the choice of disassembler has a significant impact on analysis results. Notably, no single disassembler proved to be clearly superior, as both exhibited their own strengths and limitations. Ultimately, this study offers valuable insights into the challenges and potential of cross-platform binary app analysis using traditional binary diffing techniques, laying a strong foundation for future research in this evolving field.
Mobile Security; Android; iOS; Binary Analysis; Cross-Platform Analysis
null
null
http://hdl.handle.net/20.500.12708/217584
null
2025
Keusch, Alexander
Binary Matching of Android and iOS Apps
thesis
keusch:2025:binary-matching-android-ios-apps
Diploma Thesis
Lindorfer, Martina and Bleier, Jakob
null
10.34726/hss.2025.128603
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Technische Universit\"{a}t Wien
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The rapid growth of online chess has intensified the challenge of distinguishing engine-assisted from authentic human play, exposing the limitations of existing approaches that rely solely on deterministic evaluation metrics. This study introduces a proof-of-concept hybrid framework for discriminating between engine-like and human-like chess play patterns, integrating Stockfish's deterministic evaluations with stylometric behavioral features derived from the Maia engine. Key metrics include Centipawn Loss (CPL), Mismatch Move Match Probability (MMMP), and a novel Curvature-Based Stability (\ensuremath{\Delta}S) indicator. These features were incorporated into a convolutional neural network (CNN) classifier and evaluated on a controlled benchmark dataset of 1000 games, where "suspicious" gameplay was algorithmically generated to simulate engine-optimal patterns, while "clean" play was modeled using Maia's human-like predictions. Results demonstrate the framework's ability to discriminate between these behavioral archetypes, with the hybrid model achieving a macro F1-score of 0.93, significantly outperforming the Stockfish-only baseline (F1 = 0.87), as validated by McNemar's test (p = 0.0153). Feature ablation confirmed that Maia-derived features reduced false negatives and improved recall, while \ensuremath{\Delta}S enhanced robustness. This work establishes a methodological foundation for behavioral pattern discrimination in chess, demonstrating the value of combining deterministic and human-centric modeling. Beyond chess, the approach offers a template for behavioral anomaly analysis in cybersecurity, education, and other decision-based domains, with real-world validation on adjudicated misconduct cases identified as the essential next step.
null
null
null
https://www.mdpi.com/2571-5577/9/1/11
null
2026
Kevanishvili, Zura and Iavich, Maksim
A Hybrid Human-Centric Framework for Discriminating Engine-like from Human-like Chess Play: A Proof-of-Concept Study
article
Kevanishvili:2026:hybrid-human-centric-framework-discriminating-engine-like-human-like-chess-play-proof-concept-study
null
null
null
10.3390/asi9010011
null
1
9
Applied System Innovation
null
null
null
null
null
null
null
null
null
null
null
null
2571-5577
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
online chess; gameplay integrity analysis; hybrid system design; stylometric modeling; centipawn loss; move match probability; convolutional neural networks; explainable AI; human-AI interaction; applied system innovation
11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The growing adoption of the Internet of Things (IoT) highlights the need for intuitive, accessible, and screenless modes of interaction. Voice interfaces, combining speech-to-text (STT) and text-to-speech (TTS) processing, provide a natural mechanism for controlling IoT systems while enabling inclusive user experiences. This paper proposes a modular architecture for voice-controlled IoT applications, designed for reusability across domains with minimal adaptation. The architecture integrates embedded hardware, cloud-based speech services, and real-time feedback mechanisms, forming a flexible pipeline for voice-driven interactions. To demonstrate feasibility, the architecture is implemented in the context of online blindfold chess through integration with the Lichess API. The system supports real-time gameplay using only spoken input and audio feedback, providing an accessible interface for visually impaired users and a valuable tool for blindfold chess training. Evaluation of the prototype indicates an STT accuracy of 85\% after optimizations, average move execution delay of 3.5 seconds, and end-to-end round-trip latency below 8 seconds. These results validate the practicality of the design and position it as a reusable template for robust, voice-driven IoT applications.
Training;Pipelines;Prototypes;Process control;Computer architecture;Real-time systems;Text to speech;Systems support;Internet of Things;Speech to text;Internet of Things (IoT);voice interface;speech-to-text (STT);text-to-speech (TTS);embedded systems;accessibility;blindfold chess;Lichess API
null
null
null
2025 IEEE 17th International Conference on Computational Intelligence and Communication Networks (CICN)
2025
Khamele, Ojas and Lambe, Amruta and Pawar, Praveen
A Modular Voice-Controlled IoT Architecture for Screenless Real-Time Interaction
inproceedings
khamele:2025:modular-voice-controlled-iot-architecture-screenless-real-time-interaction
null
null
null
10.1109/CICN67655.2025.11367883
1686--1690
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The 2024 Chessable Research Awards had five student winners, including Alex Knopps, the author of this guest blog post. Knopps explores whether solving chess puzzles alone or with a partner leads to better outcomes. His research also accounted for the difficulty of puzzles. The results indicate that there wasn't much difference between the number of correctly solved chess puzzles in both individual and collaborative settings. However, working in a group led to fewer errors.
null
null
null
https://www.chessable.com/blog/collaborative-versus-individual-chess-puzzle-solving/
null
2025
Knopps, Alex
Collaborative versus Individual Chess Puzzle Solving
online
knopps:collaborative-vs-individual-chess-puzzle-solving
null
null
null
null
null
null
null
null
February
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2025-03-14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
We introduce LLM CHESS, an evaluation framework designed to probe the generalization of reasoning and instruction-following abilities in large language models (LLMs) through extended agentic interaction in the domain of chess. We rank over 50 open and closed source models by playing against a random opponent using a range of behavioral metrics, including win and loss rates, move quality, move legality, hallucinated actions, and game duration. For a subset of models, we derive an Elo estimate by playing against a chess engine with variably configured skill. Despite the simplicity of the instruction-following task and the weakness of the opponent, many state-of-the-art models struggle to complete games or achieve consistent wins. Similar to other benchmarks on complex reasoning tasks, our experiments reveal a clear separation between reasoning and non-reasoning models. However, unlike existing static benchmarks, the stochastic and dynamic nature of LLM CHESS uniquely reduces overfitting and memorization while preventing benchmark saturation. To support future work on evaluating reasoning and instruction-following in LLMs, we release our experimental framework, a public leaderboard, and a dataset of associated games. Our code is available at https://github.com/LLM-CHESS/llm_chess.
null
null
https://github.com/LLM-CHESS/llm_chess_minimal, https://github.com/maxim-saplin/llm_chess/
null
Workshop on Foundations of Reasoning in Language Models at NeurIPS 2025
2025
Kolasani, Sai and Saplin, Maxim and Crispino, Nicholas and Montgomery, Kyle and Davis, Jared and Zaharia, Matei and Wang, Chi and Wang, Chenguang
LLM CHESS: Benchmarking Reasoning and Instruction-Following in LLMs through Chess
inproceedings
kolasani:2025:llm-chess-benchmarking-reasoning-instruction-following-llms-through-chess
null
null
null
null
null
null
null
null
null
null
null
https://maxim-saplin.github.io/llm_chess/
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
State-of-the-art reinforcement learning agents are capable of outperforming human experts at games like chess, Go, and StarCraft II. These agents do not simply take advantage of their digital hardware in being able to react and calculate faster than humans, but employ better strategies that lead to more victories. Interpreting these strategies would give human players valuable insight into how to improve their play. In this preliminary work, we propose a symbolic sub-policy model for playing chess. Inspired by chess tactics, our model attempts to incorporate domain knowledge to improve interpretability. We adapt patterns learned by an inductive logic programming system called PAL to derive our model. We contribute a divergence metric to evaluate our model against a random baseline, and find a set of tactics that is able to suggest moves of similar playing strength to a human beginner. Finally, we propose a computational evaluation scheme for the model by augmenting an off-the-shelf engine with it.
null
null
null
https://doi.org/10.1201/9781003355281-6
Proceedings of the Explainable Agency in Artificial Intelligence Workshop, 36th AAAI Conference on Artificial Intelligence
2022
Krishnan, Abhijeet and Martens, Chris
Towards the automatic synthesis of interpretable chess tactics
inproceedings
krishnan:2022:automatic-synthesis-interpretable-chess-tactics
null
null
https://abhijeetkrishnan.me/publications/eaai-22/Interpretable_Chess_Tactics.pdf
10.1201/9781003355281-6
91--97
null
null
null
March
null
null
null
null
American Association for Artificial Intelligence
null
null
null
null
null
null
null
null
null
null
null
null
https://abhijeetkrishnan.me/publications/eaai-22/EAAI_22_Presentation.pdf
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Competitive games admit a wide variety of player strategies and emergent, domain-specific concepts that are not obvious from an examination of their rules. Expert agents trained on these games demonstrate many useful strategies, but these are difficult for human players to understand and adopt. Algorithmically revealing these strategies could help players develop a better model for making decisions that lead to victories. This paper presents a method for the automatic discovery of player-oriented strategies for chess. We present a formal model for chess strategies, inspired by documented chess tactics, that uses first-order logic clauses for representation. Our system uses inductive logic programming to learn human-interpretable strategies for playing chess in the form of our tactic model. Given minimal background knowledge and training data drawn from real games, our system is able to learn tactics that generalize to a large number of positions. We show that these tactics cover a large number of real-world positions and produce moves that outperform a random player.
null
null
https://github.com/AbhijeetKrishnan/interpretable-chess-tactics
null
Proceedings of the Workshop on Artificial Intelligence for Strategy Games (SG) and Esports Analytics (EA), 18th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment
2022
Krishnan, Abhijeet and Martens, Chris
Synthesizing interpretable chess tactics from player games
inproceedings
krishnan:2022:synthesizing-interpretable-chess-tactics-player-games
null
null
https://www.convivial.tools/PapersPublic/aiide22-synthesizing-tactics.pdf
null
null
null
null
null
October
The title in the paper from the proceedings is "Synthesizing chess tactics from player games"
null
null
null
American Association for Artificial Intelligence
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Smartphone usage data can provide valuable insights for understanding interaction with technology and human behavior. However, collecting large-scale, in-the-wild smartphone usage logs is challenging due to high costs, privacy concerns, unrepresentative user samples and biases like non-response that can skew results. These challenges call for exploring alternative approaches to obtain smartphone usage datasets. In this context, large language models (LLMs) such as OpenAI's ChatGPT present a novel approach for synthetic smartphone usage data generation, addressing limitations of real-world data collection. We describe a case study on how four prompt strategies influenced the quality of generated smartphone usage data. We contribute with insights on prompt design and measures of data quality, reporting a prompting strategy comparison combining two factors, prompt level of detail (describing a user persona, describing the expected results characteristics) and seed data inclusion (with versus without an initial real usage example). Our findings suggest that using LLMs to generate structured and behaviorally plausible smartphone use datasets is feasible for some use cases, especially when using detailed prompts. Challenges remain in capturing diverse nuances of human behavioral patterns in a single synthetic dataset, and evaluating tradeoffs between data fidelity and diversity, suggesting the need for use-case-specific evaluation metrics and future research with more diverse seed data and different LLM models.
null
null
null
https://arxiv.org/abs/2509.13892
null
2025
Gustavo Kruger and Nikhil Sachdeva and Michael Sobolev
Synthetic Data Generation for Screen Time and App Usage
misc
kruger:2025:synthetic-data-generation-screen-time-app-usage
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2509.13892
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
cs.HC
arXiv
null
null
null
null
null
null
null
null
null
https://osf.io/u2h3d/
null
null
null
null
null
null
null
null
null
null
null
null
null
Accurately estimating human skill levels is crucial for designing effective human-AI interactions so that AI can provide appropriate challenges or guidance. In games where AI players have beaten top human professionals, strength estimation plays a key role in adapting AI behavior to match human skill levels. In a previous state-of-the-art study, researchers have proposed a strength estimator trained using human players' match data. Given some matches, the strength estimator computes strength scores and uses them to estimate player ranks (skill levels). In this paper, we focus on the observation that human players' behavior tendency varies according to their strength and aim to improve the accuracy of strength estimation by taking this into account. Specifically, in addition to strength scores, we obtain policies for different skill levels from neural networks trained using human players' match data. We then combine features based on these policies with the strength scores to estimate strength. We conducted experiments on Go and chess. For Go, our method achieved an accuracy of 80\% in strength estimation when given 10 matches, which increased to 92\% when given 20 matches. In comparison, the previous state-of-the-art method had an accuracy of 71\% with 10 matches and 84\% with 20 matches, demonstrating improvements of 8-9\%. We observed similar improvements in chess. These results contribute to developing a more accurate strength estimation method and to improving human-AI interaction.
null
null
null
https://doi.org/10.48550/arXiv.2505.00279
null
2025
Kyota Kuboki and Tatsuyoshi Ogawa and Chu{-}Hsuan Hsueh and Shi{-}Jim Yen and Kokolo Ikeda
Policies of Multiple Skill Levels for Better Strength Estimation in Games
article
kuboki:2025:policies-multiple-skill-levels-better-strength-estimation-games
null
null
null
10.48550/ARXIV.2505.00279
null
null
abs/2505.00279
CoRR
null
null
null
null
null
null
null
null
null
null
2505.00279
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This paper presents a novel AI-driven chess engine that integrates a lightweight deep learning architecture with an uncertainty-aware Monte Carlo Tree Search (MCTS) framework. Unlike traditional engines that rely on brute-force search, our model utilizes reinforcement learning with a transformer-based neural network to optimize decision-making under computational constraints. We trained our system using high-quality chess datasets from Lichess and implemented Proximal Policy Optimization (PPO) for stable learning. Experimental results demonstrate that our model achieves a 79.5\% win rate against Stockfish 16, a move accuracy of 92.3\%, and a 37.7\% reduction in inference time compared to AlphaZero, making it well-suited for real-time and resource-constrained applications. Furthermore, our findings suggest that similar AI optimization techniques can be applied to other domains requiring strategic decision-making, such as robotic control and financial modeling. Future work will explore adaptive neural networks and energy-efficient computing to further enhance performance and accessibility.
Accuracy;Monte Carlo methods;Computational modeling;Decision making;Reinforcement learning;Transformers;Computational efficiency;Artificial intelligence;Optimization;Engines;AI chess engine;reinforcement learning;monte carlo tree search (MCTS);deep learning;transformer-based models;proximal policy optimization (PPO)
null
null
null
2025 4th International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE)
2025
D, Girish Kumar and Shiva Kumar, K S and Rama Prasad, P Pani and Jalade, Sangamesh C and Praveen Kumar, C T M and D C, Subhashree
Optimizing AI-Driven Chess Bots: Strategies for Balancing Performance, Accuracy, and Computational Efficiency
inproceedings
kumar:2025:optimizing-ai-driven-chess-bots-strategies-balancing-performance-accuracy-computational-efficiency
null
null
null
10.1109/ICDCECE65353.2025.11035271
1--5
null
null
null
April
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This study expands on previous surveys of computational theory of mind (ToM) focusing on four key areas. Data: We attempt to characterize data needed for this research and propose creating procedurally generated, multi-modal synthetic data for training and testing ToM systems, addressing the lack of open-source data of agent behaviors in closed environments. Metrics: We explore ToM evaluation beyond the Sally-Anne Test, considering child development stages and natural language understanding as potential measures. Model: We investigate building on recent ToM models, exploring open-ended learning in reinforcement learning, and applying neuroscientific insights to model architecture. We also examine ToM applications in everyday technologies, leveraging state-of-the-art transformer technologies and multimodal datasets. Theoretical Formalization: We aim to bridge cognitive science and psychology concepts with mathematical approaches to facilitate algorithm development in ToM.
theory of mind, game theory, multi-agent, machine-learning, artificial intelligence, intention, adversarial dynamics, computational, automation
null
null
null
HCI International 2025 -- Late Breaking Papers
2026
Kumar, Prabhat and Zaroukian, Erin and Summers-Stay, Douglas and Raglin, Adrienne
Directions for Computational Theory of~Mind: Data, Metrics, Models and~Mathematical Formalization
inproceedings
kumar:2026:directions-computational-theory-of-mind-data-metrics-models-mathematical-formalization
null
null
null
null
53--70
null
null
null
null
null
Degen, Helmut and Ntoa, Stavroula
null
null
Springer Nature Switzerland
null
null
null
Cham
null
null
null
null
978-3-032-13184-3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Acting intelligently in complex environments poses a challenging learning problem: faced with many different situations and possible actions, how do people learn which action to take in each situation? While traditional laboratory-based experiments have been used to study specific learning mechanisms, these experiments often employ relatively simple tasks conducted over a short period of time. Thus, it is unclear to what extent these mechanisms are used in the significantly more complex and temporally extended environments people encounter in their everyday lives. To understand the processes by which people learn policies to guide their decisions, we investigate the opening strategies of novice online chess players over their first months of play. We use a large online data set consisting of 2,499,783 games, providing us with the necessary scale to explore learning mechanisms in a complex setting. In particular, we focus on two types of learning: reinforcement learning, or learning from rewards given repeated experiences, and social learning, or learning from the actions of others. We show that players' choices are modulated by both game outcomes and observing their opponents' actions, and that they exhibit important hallmarks of adaptive decision-making such as exploration and expertise. Our results provide evidence that people use sophisticated learning algorithms in naturalistic strategic behavior.
decision-making, learning, reinforcement learning, social learning
null
https://github.com/ionatankuperwajs/learning-openings
https://osf.io/preprints/psyarxiv/d8zje
null
2024
Kuperwajs, Ionatan and van Opheusden, Bas and Russek, Evan and Griffiths, Tom
Learning from rewards and social information in naturalistic strategic behavior
article
kuperwajs:2024:learning-from-rewards-social-information-strategic-behavior
null
null
null
10.31234/osf.io/d8zje
null
null
null
null
August
null
null
null
null
PsyArXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Human planning is incredibly efficient. Even in complex situations with many possible courses of action, people are able to make good decisions. Recent proposals suggest that a primary contributor to this efficiency is the intelligent use of cognitive resources, but how people allocate these resources under time constraints is not fully understood. In this work, we conduct a resource-rational analysis of planning in a large data set of online chess games. We first demonstrate that players spent more time thinking when they had more time to do so, and that this effect was especially prevalent when computation was more valuable. Then, we show that additional time spent planning resulted in better selected moves when one existed, and compare between signals of general and immediate time pressure. Finally, we highlight the role of expertise in this setting. Our results provide evidence that people make resource-rational choices when planning under time pressure.
null
null
null
https://escholarship.org/uc/item/75b4m9c2
null
2025
Kuperwajs, Ionatan and Russek, Evan and Schut, Lisa and Sagiv, Yotam and Mattar, Marcelo G and Ma, Wei Ji and Griffiths, Tom
Exploring resource-rational planning under time pressure in online chess
article
kuperwajs:2025:exploring-resource-rational-planning-time-pressure-online-chess
null
null
null
null
null
null
47
Proceedings of the Annual Meeting of the Cognitive Science Society
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Online game providers face the challenge of preventing malicious users (cheaters) from breaking the rules and winning games through illegal means. This issue in particular plagues the online chess scene, where the strongest algorithms have long surpassed the world's best players - any cheater can beat the best human players through computer assistance. Moreover, recent developments in AI-based chess engines have opened the door to even more human-like engines, which are increasingly able to mimic legitimate human players. Unfortunately, because major chess websites do not discuss their cheat detection mechanisms publicly, there is limited scientific literature on how to tackle the pervasive problem of cheating in online chess. Certainly, there is no way to validate whether these mechanisms actually work. We take a first step towards formalizing a proper cheat detection framework for online chess by leveraging a large-scale statistical examination of human and computer decision-making tendencies over millions of chess games played online. Although cheaters are not engines (computer players) but centaurs (computer-assisted human players), the insights into computer play serve as a useful guideline for finding the strongest indicators of cheating. We then demonstrate how these findings may distinguish legitimate human players from cheaters in an automated, rules-based manner. Additionally, we argue that the status quo of hiding cheat detection mechanisms from the public eye is dangerous to the integrity of the game, and that cheat detection is foremost a service to society instead of a competitive advantage for chess websites to attract more users. Consistent with Kerckhoffs' paradigm, we believe that the benefits of an open discussion on cheat detection far outweigh the potential drawbacks of cheaters learning about these methods.
null
null
null
https://doi.org/10.1007/978-3-031-34017-8_14
Computers and Games - International Conference, {CG} 2022, Virtual Event, November 22-24, 2022, Revised Selected Papers
2022
Thijs Laarhoven and Aditya Ponukumati
Towards Transparent Cheat Detection in Online Chess: An Application of Human and Computer Decision-Making Preferences
inproceedings
laarhoven:2022:transparent-cheat-detection-online-chess
null
null
null
10.1007/978-3-031-34017-8_14
163--180
null
13865
null
null
null
Cameron Browne and Akihiro Kishimoto and Jonathan Schaeffer
null
Lecture Notes in Computer Science
Springer
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article has been created to generate saliency maps that capture hierarchical and spatial features of the chessboard, in order to predict the probability of fixation for individual pixels. Using a skip-layer architecture of an autoencoder, with a unified decoder, we are able to use multiscale features to predict saliency of part of the board at different scales, showing multiple relations between pieces. We have used scan path and fixation data from players engaged in solving chess problems, to compute 6600 saliency maps associated to the corresponding chess piece configurations. This corpus is completed with synthetically generated data from actual games gathered from an online chess platform. Experiments realized using both scan-paths from chess players and the CAT2000 saliency dataset of natural images, highlights several results. Deep features, pretrained on natural images, were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps on unseen chess configurations with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
Deep neural network, Computer vision, Visual attention, Chess
null
null
https://doi.org/10.1145/3314111.3319827
Proceedings of the 11th {ACM} Symposium on Eye Tracking Research & Applications, {ETRA} 2019, Denver , CO, USA, June 25-28, 2019
2019
Justin Le Louedec and Thomas Guntz and James L. Crowley and Dominique Vaufreydaz
Deep learning investigation for chess player attention prediction using eye-tracking and game data
inproceedings
le-louedec:2019:chess-player-attention-prediction
null
null
https://dl.acm.org/doi/pdf/10.1145/3314111.3319827
10.1145/3314111.3319827
1:1--1:9
null
null
null
null
null
Krzysztof Krejtz and Bonita Sharif
null
null
{ACM}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
In the modern age of computing and technology, computer vision has become a key aspect of numerous innovations and solutions that make everyday life easier. Object recognition in images is one area where computer vision can contribute to significant improvements. Playing chess, one of the oldest and most challenging intellectual games requires concentration and tactical thinking. In computer vision, automating the recognition of chessboards and figures can contribute to developing advanced chess applications, help users analyse games, or even enable a game with a computer based on real-world chess setups. This paper describes how the system for recognising chessboards and figures using computer vision techniques was developed and implemented. Through analysing existing methods, designing a new algorithm, and experimental evaluation, the goal is to create an accurate and efficient system that can recognise chess positions and individual figures based on images.
null
null
null
null
New Technologies, Development and Application VIII
2025
Leme{\v{s}}, Samir and Koli{\'{c}}, Mirhad and Tabak, Edin
Computer Vision for Chess Game Automation
inproceedings
lemes:2025:computer-vision-chess-game-automation
null
null
null
null
21--30
null
null
null
null
null
Karabegovi{\'{c}}, Isak and Kova{\v{c}}evi{\'{c}}, Ahmed and Mand{\v{z}}uka, Sadko
null
null
Springer Nature Switzerland
null
null
null
Cham
null
null
null
null
978-3-031-95197-8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This study presents a few-shot embedding learning approach to predict the behavior of individual chess players, based on only 100 games. Traditional models have relied on extensive datasets, often requiring thousands of games to achieve accurate move predictions. In contrast, our method leverages a limited number of games to generate dense vector representations, or embeddings, that capture a player's unique style. We trained a neural network to create these embeddings and used them to predict subsequent moves. Our results indicate that the embedding model performs well across various player sets and can accurately identify players even at scale within a large player population, picking out players with 84\% accuracy from among 100k candidates. There are indications that including information on the clock situation during the game improves the embedding process, although our findings are inconclusive. Despite these limitations, our approach shows promise in making personalized chess training more accessible and highlights the potential for embedding learning in human-centered AI applications. Future work will aim to refine both the embedding and move prediction models and explore its application in other domains.
null
null
null
https://fse.studenttheses.ub.rug.nl/id/eprint/34065
null
2024
August, Lennart
A Few-Shot Embedding Learning Approach for Predicting the Behavior of Individual Chess Players
thesis
lennart:2024:few-shot-embedding-learning-approach-predicting-behvaior-individual-chess-players
Bachelor's thesis
Abreu, Steven and Jaeger, Herbert
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
University of Groningen
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
This thesis investigates how neural networks can be used to analyze and compare chess player styles. Using the last layer of Stockfish's neural network, we process positions from historical World Chess Championships (1886–2024), as well as the 2024 World Blitz and Rapid Championships. We apply dimensionality reduction (PCA, t-SNE, MDS) and clustering (K-means) to build style-based player maps, measuring similarities through Jensen–Shannon divergence. Results show consistent stylistic signatures for some players--such as Firouzja and Dubov--but may fail with others, e.g. Nepomniachtchi. We also confirmed the evolution of chess players' styles across time. Though the approach is promising, limitations remain, especially in data size and the influence of forced lines. Still, this work lays a foundation for future stylometry studies and applications in AI-driven chess training.
neural network, chess, PCA, t-SNE, MDS, Kmeans, kde
null
null
https://hdl.handle.net/2078.2/42841
null
2025
Lequenne, Victor
Characterizing Chess Player Styles with Neural Network Embeddings from Stockfish
thesis
lequenne:2025:characterizing-chess-player-styles-neural-networks-embeddings-stockfish
Master's thesis
Delvenne, Jean-Charles
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
\'{E}cole polytechnique de Louvain, Universit\'{e} catholique de Louvain
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The FedCSIS 2025 competition task is to predict the difficulty of chess puzzles; we present a structured multi-stage regression pipeline developed for this challenge. The approach consists of three stages: (i) four Elo-banded base models trained on separate rating ranges to capture localized difficulty semantics and mitigate bias in imbalanced datasets; (ii) a feature-level stacking ensemble combining base predictions with structural attributes, such as success probabilities, failure distributions, and solution length, to enhance cross-band generalization; and (iii) a lightweight post-hoc residual correction to reduce systematic prediction biases. Additionally, an uncertainty-aware mask-based evaluation is introduced to identify the 10\% most challenging puzzles for extended scoring. Our method achieved competitive results, ranking 7th in the final leaderboard, while maintaining low computational cost. These findings demonstrate that lightweight, interpretable models, when combined with structural reasoning and uncertainty estimation, can rival more complex deep-learning approaches. This study highlights the potential of structured machine learning pipelines for scalable, human-centric chess puzzle analytics.
null
null
null
http://dx.doi.org/10.15439/2025F1698
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)
2025
Alan Liang and Cenzhi Liu and Kai Wang and Ethan Liu
A Stacking-Based Ensemble Approach for Predicting Chess Puzzle Difficulty
inproceedings
liang:2025:stacking-based-ensemble-approach-predicting-chess-puzzle-difficulty
null
null
null
10.15439/2025F1698
819--824
null
43
null
null
null
Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik \'{S}l\k{e}zak
null
Annals of Computer Science and Information Systems
IEEE
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Recent large language models (LLMs) have shown strong reasoning capabilities. However, a critical question remains: do these models possess genuine reasoning skills, particularly complex strategic reasoning, or are they primarily excelling at sophisticated pattern recognition within their training data? To address this question, this paper presents a chess testbed, ChessArena, to evaluate the strategic reasoning capabilities of LLMs. Chess requires complex strategic reasoning capabilities including long-term planning, strict rule comprehension, and multi-turn conversation memorization. Specifically, ChessArena is a competitive framework where LLMs play against each other, under four different play modes. The testbed is equipped with a ranking algorithm and a leaderboard. The testbed can also evaluate fine-grained capabilities including basic understanding, move selection, and puzzle solving. Over 13 LLMs with different modes are evaluated in ChessArena, playing over 800 games. The results reveal significant shortcomings in current LLMs: no model can beat Maia-1100 (a chess engine at human amateur level), while some even failed to defeat a random player that selects moves arbitrarily. We also present a strong baseline to the testbed: our fine-tuned Qwen3-8B substantially improved performance, approaching much larger state-of-the-art reasoning models.
null
null
null
https://arxiv.org/abs/2509.24239
null
2025
Jincheng Liu and Sijun He and Jingjing Wu and Xiangsen Wang and Yang Chen and Zhaoqi Kuang and Siqi Bao and Yuan Yao
ChessArena: A Chess Testbed for Evaluating Strategic Reasoning Capabilities of Large Language Models
misc
liu:2025:chessarena-chess-testbed-evaluating-strategic-reasoning-capabilities-large-language-models
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2509.24239
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
cs.LG
arXiv
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The FedCSIS 2025 Challenge on Predicting Chess Puzzle Difficulty tasked participants with estimating puzzle ratings directly from board states and solution sequences, without relying on human solver statistics. We propose a three-stage hybrid framework integrating gradient-boosting regressors, a multi-modal neural network, and an XGBoost stacking ensemble. The boosting stage modeled handcrafted structural features derived from FEN and engine metadata, while the multi-modal network jointly learned from structured features and image-rendered chessboards to capture positional and tactical patterns. The residual-based stacking stage explicitly modeled prediction errors to correct systematic biases and enhance performance, particularly for high-difficulty puzzles. Our method achieved competitive performance, ranking 7th in the preliminary stage and 8th on the final leaderboard. These results demonstrate that combining interpretable boosting models with visual-tactical deep representations and meta-learning provides a robust and computationally efficient alternative to large-scale transformer-based approaches.
null
null
null
http://dx.doi.org/10.15439/2025F3675
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)
2025
Ming Liu and Junye Wang and Yinghan Hu and Xiaolin Yang and Defu Lin
Hybrid Boosting and Multi-Modal Fusion for Chess Puzzle Difficulty Prediction
inproceedings
liu:2025:hybrid-boosting-multi-modal-fusion-chess-puzzle-difficulty-prediction
null
null
null
10.15439/2025F3675
825--830
null
43
null
null
null
Marek Bolanowski and Maria Ganzha and Leszek Maciaszek and Marcin Paprzycki and Dominik Ślęzak
null
Annals of Computer Science and Information Systems
IEEE
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
Gambits are central to human decision-making. Our goal is to provide a theory of Gambits. A Gambit is a combination of psychological and technical factors designed to disrupt predictable play. Chess provides an environment to study gambits and behavioral game theory. Our theory is based on the Bellman optimality path for sequential decision-making. This allows us to calculate the Q-values of a Gambit where material (usually a pawn) is sacrificed for dynamic play. On the empirical side, we study the effectiveness of a number of popular chess Gambits. This is a natural setting as chess Gambits require a sequential assessment of a set of moves (a.k.a. policy) after the Gambit has been accepted. Our analysis uses Stockfish 14.1 to calculate the optimal Bellman Q-values, which fundamentally measure whether a position is winning or losing. To test whether Bellman's equation holds in play, we estimate the transition probabilities to the next board state via a database of expert human play. This then allows us to test whether the Gambiteer is following the optimal path in his decision-making. Our methodology is applied to the popular Stafford and reverse Stafford (a.k.a. Boden–Kieretsky–Morphy) Gambit and other common ones including the Smith-Morra, Goring, Danish and Halloween Gambits. We build on research in human decision-making by proving an irrational skewness preference within agents in chess. We conclude with directions for future research.
adversarial risk analysis, AI, AlphaZero, behavioral economics, behavioral game theory, behavioral science, chess gambits, decision-making, deep learning, neural network, Q learning, rationality, skewness preference, Stafford Gambit, Stockfish 14
null
null
https://onlinelibrary.wiley.com/doi/abs/10.1002/asmb.2684
null
2022
Maharaj, Shiva and Polson, Nick and Turk, Christian
Gambits: Theory and evidence
article
maharaj:2022:gambits-theory-evidence
null
null
null
10.1002/asmb.2684
572--589
4
38
Applied Stochastic Models in Business and Industry
null
null
null
null
null
null
null
null
null
null
https://onlinelibrary.wiley.com/doi/pdf/10.1002/asmb.2684
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
In the intricate landscape of game-playing algorithms, Crazyhouse stands as a complex variant of chess where captured pieces are reintroduced, presenting unique evaluation challenges. This paper explores a hybrid approach that combines traditional evaluation functions with neural network-based evaluations, seeking an optimal balance in performance. Through rigorous experimentation, including self-play, matchups against a variant of the renowned program, Go-deep experiments, and score deviations, we present compelling evidence for the effectiveness of a weighted sum of both evaluations. Remarkably, in our experiments, the combination of 75\% neural network and 25\% traditional evaluation consistently emerged as the most effective choice. Furthermore, we introduce the use of Best-Change rates, which have previously been associated with evaluation quality, in the context of Monte Carlo tree search-based algorithms.
Crazyhouse, chess variants, heuristic evaluation functions, neural networks, Best-Change rates, Monte Carlo tree search
null
null
https://doi.org/10.1007/978-3-031-54968-7_2
Advances in Computer Games: 18th International Conference, ACG 2023, Virtual Event, November 28–30, 2023, Revised Selected Papers
2023
Makovec, Anei and Pirker, Johanna and Guid, Matej
Merging Neural Networks with Traditional Evaluations in Crazyhouse
inproceedings
makovec:2023:merging-neural-networks-traditional-evaluations-crazyhouse
null
null
null
10.1007/978-3-031-54968-7_2
15--25
null
null
null
null
null
null
null
null
Springer-Verlag
11
null
null
Berlin, Heidelberg
null
null
null
null
978-3-031-54967-0
Siegen, Germany
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null