Papers
arxiv:2512.00003

Efficient Turing Machine Simulation with Transformers

Published on Sep 28, 2025
Authors:

Abstract

Constant bit-size Transformers can efficiently simulate multi-tape Turing machines with optimal context length and fewer reasoning steps, via a multi-queue TM construction.

AI-generated summary

Constant bit-size Transformers are known to be Turing complete, but existing constructions require Ω(s(n)) chain-of-thought (CoT) steps per simulated Turing machine (TM) step, leading to impractical reasoning lengths. In this paper, we significantly reduce this efficiency gap by proving that any (t(n),s(n))-bounded multi-tape TM can be simulated by a constant bit-size Transformer with an optimal O(s(n))-long context window and only O(s(n)^c) CoT steps per TM step, where c>0 can be made arbitrarily small by making the Transformer's head-layer product sufficiently large. In addition, our construction shows that sparse attention with fixed geometric offsets suffices for efficient universal computation. Our proof leverages multi-queue TMs as a bridge. The main technical novelty is a more efficient simulation of multi-tape TMs by synchronous multi-queue TMs, improving both time and space complexity under stricter model assumptions.
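To make the Ω(s(n))-per-step overhead of queue-based tape simulation concrete, here is a toy sketch (not the paper's construction) of the classic idea of holding a TM tape in a single FIFO queue: applying one TM transition requires cycling the entire queue once, so each simulated step costs O(s) queue operations when the tape uses s cells. The marker symbol, function name, and single-queue encoding are illustrative assumptions; the paper's multi-queue construction is what reduces this per-step cost to O(s(n)^c).

```python
from collections import deque

HEAD = "^"  # illustrative marker placed before the symbol under the head

def tm_step_via_queue(q, write_sym, move):
    """Apply one TM step (write `write_sym`, then move the head "L" or "R")
    to a tape held in the FIFO queue `q`.  The whole queue is cycled once,
    so a single TM step costs O(s) dequeue/enqueue operations for a tape of
    s cells -- the per-step overhead the paper's construction shrinks.
    Boundary blanks (head at either end of the tape) are omitted for brevity.
    """
    out = deque()
    while q:
        sym = q.popleft()
        if sym == HEAD:
            q.popleft()                # discard the symbol under the head
            if move == "R":
                out.append(write_sym)  # written cell stays to the left
                out.append(HEAD)       # head now precedes the next symbol
            else:  # move == "L"
                prev = out.pop()       # cell immediately left of the head
                out.append(HEAD)
                out.append(prev)       # head now sits on this cell
                out.append(write_sym)
        else:
            out.append(sym)
    q.extend(out)

# Toy run: tape "a b c" with the head on 'b'; write 'X', move right.
tape = deque(["a", HEAD, "b", "c"])
tm_step_via_queue(tape, "X", "R")
# tape is now deque(['a', 'X', '^', 'c'])
```

Note that every step touches all s cells even when the head barely moves; the synchronous multi-queue TMs used in the paper avoid exactly this full rotation.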
