# EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers
This repository contains the checkpoints for EquiformerV3, the third generation of the SE(3)-equivariant graph attention Transformer. EquiformerV3 is designed to advance efficiency, expressivity, and generality in 3D atomistic modeling. Building on EquiformerV2, this version introduces (1) software optimizations, (2) simple, effective modifications such as equivariant merged layer normalization and attention with a smooth cutoff, and (3) SwiGLU-S^2 activations, which incorporate many-body interactions and preserve strict equivariance. EquiformerV3 achieves state-of-the-art results on benchmarks including OC20, OMat24, and Matbench Discovery.
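As background on the smooth-cutoff idea, the sketch below shows a generic cosine envelope that smoothly decays distance-dependent weights (e.g., attention logits or messages) to zero at the graph cutoff radius. This is a common construction in the literature and only illustrates the general technique; the exact function used by EquiformerV3 may differ.

```python
import torch

def cosine_cutoff(r: torch.Tensor, r_cut: float) -> torch.Tensor:
    """Smooth envelope that decays to zero (with zero slope) at r_cut.

    A common choice for making distance-dependent weights, such as
    attention logits or edge messages, vanish smoothly at the graph
    cutoff radius. Illustrative only; not necessarily the exact form
    used in EquiformerV3.
    """
    env = 0.5 * (torch.cos(torch.pi * r / r_cut) + 1.0)
    # Zero out contributions beyond the cutoff radius.
    return torch.where(r < r_cut, env, torch.zeros_like(r))
```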
Please refer to the official GitHub repository for detailed instructions on environment setup and usage.
## Checkpoints
### MPtrj
| Model | Training data | Checkpoint |
|---|---|---|
| EquiformerV3 | MPtrj | mptrj_gradient.pt |
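For a quick sanity check of a downloaded checkpoint, a minimal usage sketch follows. It assumes the checkpoint loads through a fairchem-style `OCPCalculator` ASE interface, which is an assumption on our part; confirm the supported entry point in the official repository.

```python
# Minimal sketch (assumption: EquiformerV3 checkpoints load through a
# fairchem-style OCPCalculator; confirm the entry point in the official repo).
from ase.build import bulk
from fairchem.core import OCPCalculator  # assumed interface

# Build a small test system: FCC copper.
atoms = bulk("Cu", "fcc", a=3.6)

# Point the calculator at the downloaded MPtrj checkpoint.
atoms.calc = OCPCalculator(checkpoint_path="mptrj_gradient.pt", cpu=True)

print("Energy (eV):", atoms.get_potential_energy())
print("Forces (eV/Å):", atoms.get_forces())
```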
### OMat24 → MPtrj and sAlex
Training consists of (1) direct pre-training on OMat24, (2) gradient fine-tuning on OMat24 initialized from (1), and (3) gradient fine-tuning on MPtrj and sAlex initialized from (2).
| Model | Training data | Config | Checkpoint |
|---|---|---|---|
| EquiformerV3 (direct pre-training) | OMat24 | omat24_direct.yml | omat24_direct.pt |
| EquiformerV3 (gradient fine-tuning) | OMat24 | omat24_gradient.yml | omat24_gradient.pt |
| EquiformerV3 (gradient fine-tuning) | MPtrj and sAlex | mptrj-salex_gradient.yml | omat24-mptrj-salex_gradient.pt |
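Because the gradient checkpoints obtain forces as energy gradients, they can drive standard ASE relaxations. The sketch below is illustrative and relies on the same assumed fairchem-style calculator interface as above; the toy structure and optimizer settings are placeholders, not recommended values.

```python
# Illustrative relaxation sketch (same fairchem-style interface assumption
# as above); the structure and settings below are placeholders.
from ase.build import bulk
from ase.optimize import FIRE
from fairchem.core import OCPCalculator  # assumed interface

# Rattled FCC aluminium as a toy starting structure.
atoms = bulk("Al", "fcc", a=4.05, cubic=True)
atoms.rattle(stdev=0.05, seed=0)

atoms.calc = OCPCalculator(
    checkpoint_path="omat24-mptrj-salex_gradient.pt", cpu=True
)

# Relax positions until the maximum force falls below 0.05 eV/Å.
FIRE(atoms).run(fmax=0.05, steps=200)
print("Relaxed energy (eV):", atoms.get_potential_energy())
```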
## Citation
If you find this work helpful, please consider citing:
```bibtex
@article{equiformer_v3,
    title={EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers},
    author={Yi-Lun Liao and Alexander J. Hoffman and Sabrina C. Shen and Alexandre Duval and Sam Walton Norwood and Tess Smidt},
    journal={arXiv preprint arXiv:2604.09130},
    year={2026}
}
```